On Tue, Apr 08, 2025 at 11:38:31AM +0200, Phil Sutter wrote:
> On Tue, Apr 08, 2025 at 08:16:51AM +, Hangbin Liu wrote:
> > Convert the selftest to nft as it is the replacement for iptables, which
> > is used by default in most releases.
> >
> > Signed-off-by: Han
Hi,
On Tue, Apr 08, 2025 at 08:16:51AM +, Hangbin Liu wrote:
> Convert the selftest to nft as it is the replacement for iptables, which
> is used by default in most releases.
>
> Signed-off-by: Hangbin Liu
What are the changes since v5, please?
Thanks, Phil
ut priority filter \;
> policy accept \; }
You may skip the 'policy accept \;' part in all 'add chain' commands as
this is the default for all chains. Unless you prefer to explicitly
state the chain policy, of course.
Cheers, Phil
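The equivalence can be sketched as follows (illustrative commands, assuming a recent nftables with named hook priorities; the table/chain names mirror the patch):

```shell
# Explicit chain policy, as written in the patch:
nft add table inet filter
nft add chain inet filter input \
    '{ type filter hook input priority filter ; policy accept ; }'

# Equivalent form -- accept is already the default policy for new base
# chains, so 'policy accept ;' may be omitted:
nft add chain inet filter input \
    '{ type filter hook input priority filter ; }'
```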
On Fri, Mar 21, 2025 at 12:45:17PM +, Hangbin Liu wrote:
> On Fri, Mar 21, 2025 at 12:42:42PM +0100, Phil Sutter wrote:
> > Hi Hangbin,
> >
> > On Fri, Mar 21, 2025 at 10:40:25AM +, Hangbin Liu wrote:
> > > Hi Jason, Phil,
> > > On Wed, Mar 19, 2025
Hi Hangbin,
On Fri, Mar 21, 2025 at 10:40:25AM +, Hangbin Liu wrote:
> Hi Jason, Phil,
> On Wed, Mar 19, 2025 at 05:15:41PM +0100, Jason A. Donenfeld wrote:
> > On Mon, Jan 06, 2025 at 08:10:43AM +, Hangbin Liu wrote:
> > > + echo "file /bin/nft $(NFTA
You may find proper details about the failure in config.log. My guess is
the cross build prevents host libraries from being used. (No idea why
your manual call works, though.)
> But I can config it manually like: ./configure --prefix=/
> --build=x86_64-redhat-linux --host=x86_64-linux-musl --enable-static
> --disable-shared correctly
>
> Do you have any idea?
You may just pass '--with-mini-gmp' to nftables' configure call to avoid
the external dependency.
Cheers, Phil
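For reference, a sketch of such a configure invocation (the prefix and target triplet mirror the quoted command; the only addition is --with-mini-gmp, which makes nftables use its bundled mini-gmp instead of an external libgmp):

```shell
# Cross/static build of nftables without the external libgmp dependency:
./configure --prefix=/ \
    --build=x86_64-redhat-linux --host=x86_64-linux-musl \
    --enable-static --disable-shared \
    --with-mini-gmp
```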
; policy accept \; }
> + n0 nft add rule inet filter INPUT meta length 1360 counter drop
> +else
> + n0 iptables -A INPUT -m length --length 1360 -j DROP
> +fi
> n1 ip route add 192.168.241.2/32 dev wg0 mtu 1299
> n2 ip route add 192.168.241.1/32 dev wg0 mtu 1299
> n2 ping -c 1 -W 1 -s 1269 192.168.241.1
> n2 ip route delete 192.168.241.1/32 dev wg0 mtu 1299
> n1 ip route delete 192.168.241.2/32 dev wg0 mtu 1299
> -n0 iptables -F INPUT
> +if use_nft; then
> + n0 nft delete table inet filter
Here just flush the table (drops only the rules):
| n0 nft flush table ip wgtest
Cheers, Phil
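For illustration, the difference between the two cleanup commands (a sketch; shown here with the 'inet filter' table used earlier in the patch):

```shell
# 'delete table' removes the table itself along with all chains and rules:
nft delete table inet filter

# 'flush table' drops only the rules, keeping the table and chain
# skeleton so subsequent test runs can reuse the same structure:
nft flush table inet filter
```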
On Wed, May 15, 2024 at 01:06:13PM +0100 Qais Yousef wrote:
> On 05/15/24 07:20, Phil Auld wrote:
> > On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote:
> > > On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote:
> > > >
> > > > Hi Qais
On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote:
> On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote:
> >
> > Hi Qais,
> >
> > On Wed, May 15, 2024 at 12:41:12AM +0100 Qais Yousef wrote:
> > > rt_task() checks if a task has RT priority.
would mean this stays as it was but this
change makes sense as you have written it too.
Cheers,
Phil
>
> No functional changes were intended.
>
> [1]
> https://lore.kernel.org/lkml/20240506100509.gl40...@noisy.programming.kicks-ass.net/
>
> Signed-off-by: Qais Y
On Mon, Apr 19, 2021 at 06:17:47PM +0100 Valentin Schneider wrote:
> On 19/04/21 08:59, Phil Auld wrote:
> > On Fri, Apr 16, 2021 at 10:43:38AM +0100 Valentin Schneider wrote:
> >> On 15/04/21 16:39, Rik van Riel wrote:
> >> > On Thu, 2021-04-15 at 18:58 +
&&
> >> + !migrate_degrades_capacity(p, env))
> >> + tsk_cache_hot = 0;
> >
> > ... I'm starting to wonder if we should not rename the
> > tsk_cache_hot variable to something else to make this
> > code more readable. Probably in another patch :)
> >
>
> I'd tend to agree, but naming is hard. "migration_harmful" ?
I thought Rik meant tsk_cache_hot, for which I'd suggest at least
buying a vowel and putting an 'a' in there :)
Cheers,
Phil
>
> > --
> > All Rights Reversed.
>
--
run-time between running or stopped auditd, at least for
large rulesets. Individual calls suffer from added audit logging, but
that's expected of course.
Tested-by: Phil Sutter
Thanks, Phil
On Thu, Mar 18, 2021 at 02:37:03PM -0400, Richard Guy Briggs wrote:
> On 2021-03-18 17:30, Phil Sutter wrote:
[...]
> > Why did you leave the object-related logs in place? They should reappear
> > at commit time just like chains and sets for instance, no?
>
> There are
table->handle);
> + net->nft.base_seq);
>
> audit_log_nfcfg(buf,
> family,
Why did you leave the object-related logs in place? They should reappear
at commit time just like chains and sets for instance, no?
Thanks, Phil
> In this sense, I suggest limiting the burst buffer to around 16 times the
> quota. That should be enough for users to improve tail latency caused by
> throttling. And users might choose a smaller one, or even none, if the
> interference is unacceptable. What do you think?
>
Having quotas that can regularly be exceeded by 16 times seems to make the
concept of a quota meaningless. I'd have thought a burst would be some small
percentage.
What if several such containers burst at the same time? Can't that lead to
overcommit that can affect other well-behaved containers?
Cheers,
Phil
--
ication per transaction then? I guess
Florian sufficiently illustrated how this would be implemented.
> Hope this helps...
It does, thanks a lot for the information!
Thanks, Phil
ipset IMHO.
Unlike nft monitor, auditd is not designed to be disabled "at will". So
turning it off for performance-critical workloads is not an option.
Cheers, Phil
Nicolas,
On Tue, 9 Feb 2021 at 14:00, Nicolas Saenz Julienne
wrote:
>
> On Tue, 2021-02-09 at 13:19 +, Phil Elwell wrote:
> > Hi Nicolas,
> >
> > On Tue, 9 Feb 2021 at 13:00, Nicolas Saenz Julienne
> > wrote:
> > >
> > > In BCM2711 the n
13 +633,22 @@ static int bcm2835_power_probe(struct platform_device
> *pdev)
> power->dev = dev;
> power->base = pm->base;
> power->rpivid_asb = pm->rpivid_asb;
> + power->argon_asb = pm->argon_asb;
>
> - id = ASB_READ(ASB_AXI_BRDG_ID);
> + id = ASB_READ(ASB_AXI_BRDG_ID, false);
> if (id != 0x62726467 /* "BRDG" */) {
> - dev_err(dev, "ASB register ID returned 0x%08x\n", id);
> + dev_err(dev, "RPiVid ASB register ID returned 0x%08x\n", id);
> return -ENODEV;
> }
>
> + if (pm->argon_asb) {
> + id = ASB_READ(ASB_AXI_BRDG_ID, true);
> + if (id != 0x62726467 /* "BRDG" */) {
> + dev_err(dev, "Argon ASB register ID returned
> 0x%08x\n", id);
> + return -ENODEV;
> + }
> + }
> +
Surely these are the same register. Is this the result of a bad merge?
Thanks,
Phil
blow up the audit log.
But we'd like to hear alternatives.
On Wed, 2021-02-03 at 18:57 +, Daniel Walker (danielwa) wrote:
> On Tue, Feb 02, 2021 at 04:44:47PM -0500, Paul Moore wrote:
> > On Tue, Feb 2, 2021 at 4:29 PM Daniel Walker <
> > danie...@cisco.com
&g
On Tue, Jan 05, 2021 at 12:41:04AM +0100, Arnd Bergmann wrote:
> Phil Oester reported that a fix for a possible buffer overrun that I
> sent caused a regression that manifests in this output:
>
> Event Message: A PCI parity error was detected on a component at bus 0
> devi
our patch and it resolves the regression. It does not
trigger the warning message you added.
Phil
ice 5
function 0.
Severity: Critical
Message ID: PCI1308
I reverted this single patch and the errors went away.
Thoughts?
Phil Oester
On Mon, Nov 09, 2020 at 03:38:15PM + Mel Gorman wrote:
> On Mon, Nov 09, 2020 at 10:24:11AM -0500, Phil Auld wrote:
> > Hi,
> >
> > On Fri, Nov 06, 2020 at 04:00:10PM + Mel Gorman wrote:
> > > On Fri, Nov 06, 2020 at 02:33:56PM +0100, Vincent Guittot wrote:
hackbench latency on the same EPYC
first gen servers.
As I mentioned earlier in the thread we have all the 5.9 patches in this area
in our development distro kernel (plus a handful from 5.10-rc) and don't see
the same effect we see here between 5.8 and 5.9 caused by this patch. But
there are other variables there. We've queued up a comparison between that
kernel and one with just the patch in question reverted. That may tell us
if there is an effect that is otherwise being masked.
Jirka - feel free to correct me if I mis-summarized your results :)
Cheers,
Phil
--
e load balancing improvements and some minor overall perf
gains in a few places, but generally did not see any difference from before
the commit mentioned here.
I'm wondering, Mel, if you have compared 5.10-rc1?
We don't have everything though so it's possible something we have
n
;
raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
It's just a leftover. I agree that if it was there for some other
purpose that it would really need a comment. In this case, it's an
artifact of patch-based development I think.
Cheers,
Phil
> avid
>
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1
> 1PT, UK
> Registration No: 1397386 (Wales)
>
--
> @@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct
> cfs_bandwidth *cfs_b)
> return;
>
> distribute_cfs_runtime(cfs_b);
> -
> - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> - raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
> }
>
> /*
> --
> 2.29.0
>
>
Nice :)
Reviewed-by: Phil Auld
--
ee to advise on any corrections or improvements that can be
> made.
Thanks for these. I wonder, though, if it would not make more sense
to post these changes as comments on the original as-yet-unmerged
patches that you are fixing up?
Cheers,
Phil
>
> John B. Wyatt IV (8):
> sched
ault resulting in unfair finger pointing at one company's test
> team. If at least two distos check it out and it still goes wrong, at
> least there will be shared blame :/
>
> > > Other distros assuming they're watching can nominate their own victim.
> >
> >
erally equivalent to SLAB in terms of performance. Block
> > multiqueue also had vaguely similar issues before the default changes
> > and a period of time before it was removed (example whinging mail
> > https://lore.kernel.org/lkml/20170803085115.r2jfz2lofy5sp...@techsingularity.net/)
> > It's schedutil's turn :P
> >
>
Agreed. I'd like the option to switch back if we make the default change.
It's on the table and I'd like to be able to go that way.
Cheers,
Phil
--
n-Kuang Hu wrote:
> Hi, Phil:
>
> Phil Chang wrote on Sunday, October 4, 2020 at 1:51 PM:
> >
> > Certain SoCs need to support large amount of reserved memory
> > regions, especially to follow the GKI rules from Google.
> > In MTK new SoC requires more than 68 regions of reserved m
ff-by: Joe Liu
Signed-off-by: YJ Chiang
Signed-off-by: Alix Wu
Signed-off-by: Phil Chang
---
drivers/of/of_reserved_mem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 46b9371c8a33..595f0741dcef 100644
_load_avg(cfs_rq, false);
> + update_tg_load_avg(cfs_rq);
> propagate_entity_cfs_rq(se);
> }
>
> @@ -10805,7 +10804,7 @@ static void attach_entity_cfs_rq(struct sched_entity
> *se)
> /* Synchronize entity with its cfs_rq */
> update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 :
> SKIP_AGE_LOAD);
> attach_entity_load_avg(cfs_rq, se);
> - update_tg_load_avg(cfs_rq, false);
> + update_tg_load_avg(cfs_rq);
> propagate_entity_cfs_rq(se);
> }
>
> --
> 2.17.1
>
LGTM,
Reviewed-by: Phil Auld
--
On Thu, Sep 24, 2020 at 10:43:12AM -0700 Tim Chen wrote:
>
>
> On 9/24/20 10:13 AM, Phil Auld wrote:
> > On Thu, Sep 24, 2020 at 09:37:33AM -0700 Tim Chen wrote:
> >>
> >>
> >> On 9/22/20 12:14 AM, Vincent Guittot wrote:
> >>
> >>>
On Thu, Sep 24, 2020 at 09:37:33AM -0700 Tim Chen wrote:
>
>
> On 9/22/20 12:14 AM, Vincent Guittot wrote:
>
> >>
>
> And a quick test with hackbench on my octo cores arm64 gives for 12
>
> Vincent,
>
> Is it octo (=10) or octa (=8) cores on a single socket for your system?
In what
Actually, in an embedded system with 3GB of memory, the memory bus width is
not the same across the 3GB.
(The first 2GB is 48-bit wide, and the latter 1GB is 16-bit wide.)
For memory throughput reasons, the hardware IPs need their memory allocated
from the first 2GB. And that is why we
,
> gfp_zone(GFP_HIGHUSER),
> @@ -2516,6 +2520,7 @@ int mpol_misplaced(struct page *page, struct
> vm_area_struct *vma, unsigned long
>
> /* Migrate the page towards the node whose CPU is referencing it */
> if (pol->flags & MPOL_F_MORON) {
> +moron:
> polnid = thisnid;
>
> if (!should_numa_migrate_memory(current, page, curnid, thiscpu))
> --
> 2.28.0
>
Cheers,
Phil
--
On Fri, Sep 18, 2020 at 12:39:28PM -0400 Phil Auld wrote:
> Hi Peter,
>
> On Mon, Sep 14, 2020 at 01:42:02PM +0200 pet...@infradead.org wrote:
> > On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote:
> > > Vincent Guittot (4):
> > > sched/fair: relax
inimal imbalance threshold
> > sched/fair: minimize concurrent LBs between domain level
> > sched/fair: reduce busy load balance interval
>
> I see nothing objectionable there, a little more testing can't hurt, but
> I'm tempted to apply them.
>
> Phil,
of architecture
Signed-off-by: Alix Wu
Signed-off-by: YJ Chiang
Signed-off-by: Phil Chang
---
Hi
supplement the reason of this usage.
Thanks.
.../admin-guide/kernel-parameters.txt | 3 +++
arch/arm64/include/asm/memory.h | 2 ++
arch/arm64/mm/init.c
This patch allows the DMA32 zone to be configurable on ARM64.
Signed-off-by: Alix Wu
Signed-off-by: YJ Chiang
Signed-off-by: Phil Chang
---
For some devices, the main memory is split into 2 parts due to the memory
architecture: the efficient and the less efficient part.
One of the use cases is fine
Allow the DMA32 zone to be configurable on ARM64, but at most 4GB.
Signed-off-by: Alix Wu
Signed-off-by: YJ Chiang
Signed-off-by: Phil Chang
---
.../admin-guide/kernel-parameters.txt | 3 ++
arch/arm64/include/asm/memory.h | 2 +
arch/arm64/mm/init.c
ance threshold
> > sched/fair: minimize concurrent LBs between domain level
> > sched/fair: reduce busy load balance interval
>
> I see nothing objectionable there, a little more testing can't hurt, but
> I'm tempted to apply them.
>
> Phil, Mel, any chance
Hi Quais,
On Mon, Sep 07, 2020 at 12:02:24PM +0100 Qais Yousef wrote:
> On 09/02/20 09:54, Phil Auld wrote:
> > >
> > > I think this decoupling is not necessary. The natural place for those
> > > scheduler trace_event based on trace_points extension files is
&g
a possibility for it. However:
> >
> > "Cpusets provide a Linux kernel mechanism to constrain which CPUs and
> > Memory Nodes are used by a process or set of processes.
> >
> > The Linux kernel already has a pair of mechanisms to specify on which
> > CPUs a task may be scheduled (sched_setaffinity) and on which Memory
> > Nodes it may obtain memory (mbind, set_mempolicy).
> >
> > Cpusets extends these two mechanisms as follows:"
> >
> > The isolation flags do not necessarily have anything to do with
> > tasks, but with CPUs: a given feature is disabled or enabled on a
> > given CPU.
> > No?
>
> One cpumask per feature, implemented separately in sysfs, also
> seems OK (modulo documentation about the RCU update and users
> of the previous versions).
>
> This is what is being done for rcu_nocbs= already...
>
Exclusive cpusets are used now to control scheduler load balancing on
a group of cpus. It seems to me that this is the same idea and is part
of the isolation concept. Having a toggle for each subsystem/feature in
cpusets could provide the needed userspace api.
Under the covers it might be implemented as twiddling various cpumasks.
We need to be shifting to managing load balancing with cpusets anyway.
Cheers,
Phil
--
On Wed, Sep 02, 2020 at 12:44:42PM +0200 Dietmar Eggemann wrote:
> + Phil Auld
>
Thanks Dietmar.
> On 28/08/2020 19:26, Qais Yousef wrote:
> > On 08/28/20 19:10, Dietmar Eggemann wrote:
> >> On 28/08/2020 12:27, Qais Yousef wrote:
> >>> On 08/28/20 10
On 25/08/2020 21:28, Wolfram Sang wrote:
Hi Phil,
yes, this thread is old but a similar issue came up again...
On Fri, Oct 25, 2019 at 09:14:00AM +0800, Phil Reid wrote:
So at the beginning of a new transfer, we should check if SDA (or SCL?)
is low and, if it's true, only then we s
>> this patch allows the arm64 DMA zone to be configurable.
>>
>> Signed-off-by: Alix Wu
>> Signed-off-by: YJ Chiang
>> Signed-off-by: Phil Chang
>> ---
>> Hi
>>
>> For some devices, the main memory split into 2 part due to the memory
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: a1bd06853ee478d37fae9435c5521e301de94c67
Gitweb:
https://git.kernel.org/tip/a1bd06853ee478d37fae9435c5521e301de94c67
Author: Phil Auld
AuthorDate: Wed, 05 Aug 2020 16:31:38 -04:00
Committer
The count field is meant to tell if an update to nr_running
is an add or a subtract. Make it do so by adding the missing
minus sign.
Fixes: 9d246053a691 ("sched: Add a tracepoint to track rq->nr_running")
Signed-off-by: Phil Auld
---
kernel/sched/sched.h | 2 +-
1 file changed
This patch allows the arm64 DMA zone to be configurable.
Signed-off-by: Alix Wu
Signed-off-by: YJ Chiang
Signed-off-by: Phil Chang
---
Hi
For some devices, the main memory is split into 2 parts due to the memory
architecture: the efficient and the less efficient part.
One of the use cases is fine
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 9d246053a69196c7c27068870e9b4b66ac536f68
Gitweb:
https://git.kernel.org/tip/9d246053a69196c7c27068870e9b4b66ac536f68
Author: Phil Auld
AuthorDate: Mon, 29 Jun 2020 15:23:03 -04:00
Committer
re going to wake up a thread waiting for CONDITION we
>* need to ensure that CONDITION=1 done by the caller can not be
> - * reordered with p->state check below. This pairs with mb() in
> - * set_current_state() the waiting thread does.
> + * reordered with p->state
ore-scheduling and just
> use that for tagging. (No need to even have a tag file, just adding/removing
> to/from CGroup will tag).
>
... this could be an interesting approach. Then the cookie could still
be the cgroup address as is and there would be no need for the prctl. At
least so it s
Tracepoints are added to add_nr_running() and sub_nr_running() which
are in kernel/sched/sched.h. In order to avoid CREATE_TRACE_POINTS in
the header a wrapper call is used and the trace/events/sched.h include
is moved before sched.h in kernel/sched/core.
Signed-off-by: Phil Auld
CC: Qais Yousef
CC
Hi Qais,
On Mon, Jun 22, 2020 at 01:17:47PM +0100 Qais Yousef wrote:
> On 06/19/20 10:11, Phil Auld wrote:
> > Add a bare tracepoint trace_sched_update_nr_running_tp which tracks
> > ->nr_running CPU's rq. This is used to accurately trace this data and
> > provide
On Fri, Jun 19, 2020 at 12:46:41PM -0400 Steven Rostedt wrote:
> On Fri, 19 Jun 2020 10:11:20 -0400
> Phil Auld wrote:
>
> >
> > diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> > index ed168b0e2c53..a6d9fe5a68cf 100644
> > --- a/inclu
Tracepoints are added to add_nr_running() and sub_nr_running() which
are in kernel/sched/sched.h. Since sched.h includes trace/events/tlb.h
via mmu_context.h we had to limit when CREATE_TRACE_POINTS is defined.
Signed-off-by: Phil Auld
CC: Qais Yousef
CC: Ingo Molnar
CC: Peter Zijlstra
CC: Vincen
On Tue, Jun 09, 2020 at 07:05:38AM +0800 Tao Zhou wrote:
> Hi Phil,
>
> On Mon, Jun 08, 2020 at 10:53:04AM -0400, Phil Auld wrote:
> > On Sun, Jun 07, 2020 at 09:25:58AM +0800 Tao Zhou wrote:
> > > Hi,
> > >
> > > On Fri, May 01, 2020 at 06:
> > don't start a distribution while one is already running. However, even
> > in the event that this race occurs, it is fine to have two distributions
> > running (especially now that distribute grabs the cfs_b->lock to
> > determine remaining quota before assigning).
> &g
On Thu, May 28, 2020 at 02:17:19PM -0400 Phil Auld wrote:
> On Thu, May 28, 2020 at 07:01:28PM +0200 Peter Zijlstra wrote:
> > On Sun, May 24, 2020 at 10:00:46AM -0400, Phil Auld wrote:
> > > On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote:
> > > > On F
On Thu, May 28, 2020 at 07:01:28PM +0200 Peter Zijlstra wrote:
> On Sun, May 24, 2020 at 10:00:46AM -0400, Phil Auld wrote:
> > On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote:
> > > On Fri, May 22, 2020 at 02:59:05PM +0200, Peter Zijlstra wrote:
> > >
ch as well. Nothing would happen to the tagged task as they were added
to the cgroup. They'd keep their explicitly assigned tags and everything
should "just work". There are other reasons to be in a cpu cgroup together
than just the core scheduling tag.
There are a few other edge cases, like if you are in a cgroup, but have
been tagged explicitly with sched_setattr and then get untagged (presumably
by setting 0) do you get the cgroup tag or just stay untagged? I think based
on per-task winning you'd stay untagged. I supposed you could move out and
back in the cgroup to get the tag reapplied (Or maybe the cgroup interface
could just be reused with the same value to re-tag everyone who's untagged).
Cheers,
Phil
> thanks,
>
> - Joel
>
--
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: b34cb07dde7c2346dec73d053ce926aeaa087303
Gitweb:
https://git.kernel.org/tip/b34cb07dde7c2346dec73d053ce926aeaa087303
Author: Phil Auld
AuthorDate: Tue, 12 May 2020 09:52:22 -04:00
Committer
Hi,
On Wed, May 13, 2020 at 11:20:35PM +0800, Xiubo Li wrote:
> Recently I hit one netfilter issue, it seems the API breaks or something
> else.
Just for the record, this was caused by a misconfigured kernel.
Cheers, Phil
On Wed, May 13, 2020 at 03:25:29PM +0200 Vincent Guittot wrote:
> On Wed, 13 May 2020 at 15:18, Phil Auld wrote:
> >
> > On Wed, May 13, 2020 at 03:15:53PM +0200 Vincent Guittot wrote:
> > > On Wed, 13 May 2020 at 15:13, Phil Auld wrote:
> > > >
> > &g
On Wed, May 13, 2020 at 03:15:53PM +0200 Vincent Guittot wrote:
> On Wed, 13 May 2020 at 15:13, Phil Auld wrote:
> >
> > On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote:
> > > On Wed, 13 May 2020 at 14:45, Phil Auld wrote:
> > > >
> > >
On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote:
> On Wed, 13 May 2020 at 14:45, Phil Auld wrote:
> >
> > Hi Vincent,
> >
> > On Wed, May 13, 2020 at 02:33:35PM +0200 Vincent Guittot wrote:
> > > enqueue_task_fair jumps to enqueue_
the same pattern as
> enqueue_task_fair(). This fixes a problem already faced with the latter and
> add an optimization in the last for_each_sched_entity loop.
>
> Reported-by: Tao Zhou
> Reviewed-by: Phil Auld
> Signed-off-by: Vincent Guittot
> ---
>
> v2 changes:
> - R
it doesn't jump to the label then se must be NULL for
the loop to terminate. The final loop is a NOP if se is NULL. The check
wasn't protecting that.
Otherwise still
> Reviewed-by: Phil Auld
Cheers,
Phil
> Signed-off-by: Vincent Guittot
> ---
>
> v2 changes:
> -
with this one as well. As expected, since
the first patch fixed the issue I was seeing and I wasn't hitting
the assert here anyway, I didn't hit the assert.
But I also didn't hit any other issues, new or old.
It makes sense to use the same logic flow here as enqueue_task_fair.
Reviewed-by: Phil Auld
Cheers,
Phil
--
uct *p,
> int flags)
>
> }
>
> +enqueue_throttle:
> if (cfs_bandwidth_used()) {
> /*
> * When bandwidth control is enabled; the cfs_rq_throttled()
> --
> 2.17.1
>
Reviewed-by: Phil Auld
--
On Tue, May 12, 2020 at 04:10:48PM +0200 Peter Zijlstra wrote:
> On Tue, May 12, 2020 at 09:52:22AM -0400, Phil Auld wrote:
> > sched/fair: Fix enqueue_task_fair warning some more
> >
> > The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning)
> >
add fixes and review tags.
Suggested-by: Vincent Guittot
Signed-off-by: Phil Auld
Cc: Peter Zijlstra (Intel)
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Juri Lelli
Reviewed-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Fixes: fe61468b2cb (sched/fair: Fix enqueue_task_fair warning)
---
ke
Hi Dietmar,
On Tue, May 12, 2020 at 11:00:16AM +0200 Dietmar Eggemann wrote:
> On 11/05/2020 22:44, Phil Auld wrote:
> > On Mon, May 11, 2020 at 09:25:43PM +0200 Vincent Guittot wrote:
> >> On Thu, 7 May 2020 at 22:36, Phil Auld wrote:
> >>>
> >>> sche
On Mon, May 11, 2020 at 09:25:43PM +0200 Vincent Guittot wrote:
> On Thu, 7 May 2020 at 22:36, Phil Auld wrote:
> >
> > sched/fair: Fix enqueue_task_fair warning some more
> >
> > The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning)
> > did
first loop.
Address this by calling leaf_add_rq_list if there are throttled parents while
doing the second for_each_sched_entity loop.
Suggested-by: Vincent Guittot
Signed-off-by: Phil Auld
Cc: Peter Zijlstra (Intel)
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Juri Lelli
---
kernel/sched/fair.c | 7
Hi Vincent,
On Thu, May 07, 2020 at 05:06:29PM +0200 Vincent Guittot wrote:
> Hi Phil,
>
> On Wed, 6 May 2020 at 20:05, Phil Auld wrote:
> >
> > Hi Vincent,
> >
> > Thanks for taking a look. More below...
> >
> > On Wed, May 06, 2020 at 06:36:45
g would
effect. At an initial glance I'm thinking it would be the imbalance_min
which is currently hardcoded to 2. But there may be something else...
Cheers,
Phil
> Thanks a lot!
> Jirka
>
> On Thu, May 7, 2020 at 5:54 PM Mel Gorman wrote:
> >
> > On Thu, May 07
Hi Vincent,
On Thu, May 07, 2020 at 05:06:29PM +0200 Vincent Guittot wrote:
> Hi Phil,
>
> On Wed, 6 May 2020 at 20:05, Phil Auld wrote:
> >
> > Hi Vincent,
> >
> > Thanks for taking a look. More below...
> >
> > On Wed, May 06, 2020 at 06:36:45
Hi Vincent,
Thanks for taking a look. More below...
On Wed, May 06, 2020 at 06:36:45PM +0200 Vincent Guittot wrote:
> Hi Phil,
>
> - reply to all this time
>
> On Wed, 6 May 2020 at 16:18, Phil Auld wrote:
> >
> > sched/fair: Fix enqueue_task_fair warning some mo
first loop.
Address this issue by saving the se pointer when the first loop exits and
resetting it before doing the fix up, if needed.
Signed-off-by: Phil Auld
Cc: Peter Zijlstra (Intel)
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Juri Lelli
---
kernel/sched/fair.c | 4
1 file changed, 4 insertion
find_idlest_group
> > >
> > > kernel/sched/fair.c | 1181
> > > +--
> > > 1 file changed, 682 insertions(+), 499 deletions(-)
> >
> > Thanks, that's an excellent series!
> >
> > I've queued it up
On Tue, Oct 08, 2019 at 05:53:11PM +0200 Vincent Guittot wrote:
> Hi Phil,
>
...
> While preparing v4, I have noticed that I have probably oversimplified
> the end of find_idlest_group() in patch "sched/fair: optimize
> find_idlest_group" when it compares local
17.32 19.37 23.92 21.08
There is high variance so it may not be anything specific between v1 and v3
here.
The initial fixes I made for this issue did not exhibit this behavior. They
would have had other issues dealing with overload cases though. In this case
however there are only 154 or
20, cfs_quota_us = 3200)
[ 1393.965140] cfs_period_timer[cpu11]: period too short, but cannot scale up
without losing precision (cfs_period_us = 20, cfs_quota_us = 3200)
I suspect going higher could cause the original lockup, but that'd be the case
with the old code as well.
An
Hi Xuewei,
On Fri, Oct 04, 2019 at 05:28:15PM -0700 Xuewei Zhang wrote:
> On Fri, Oct 4, 2019 at 6:14 AM Phil Auld wrote:
> >
> > On Thu, Oct 03, 2019 at 07:05:56PM -0700 Xuewei Zhang wrote:
> > > +cc neeln...@google.com and hao...@google.com, they helped a lot
>
On Thu, Oct 03, 2019 at 07:05:56PM -0700 Xuewei Zhang wrote:
> +cc neeln...@google.com and hao...@google.com, they helped a lot
> for this issue. Sorry I forgot to include them when sending out the patch.
>
> On Thu, Oct 3, 2019 at 5:55 PM Phil Auld wrote:
> >
> > Hi
max_cfs_quota_period/2 and max_cfs_quota_period that would get us out of
the loop. Possibly in practice it won't matter but here you trigger the
warning and take no action to keep it from continuing.
Also, if you are actually hitting this then you might want to just start at
a higher but propo
ev, "Not using confd gpio");
}
/* Register manager with unique name */
Best regards,
Pavel
--
Regards
Phil Reid
wrong
group in find_busiest_group due to using the average load. The second was in
fix_small_imbalance(). The "load" of the lu.C tasks was so low it often failed
to move anything even when it did find a group that was overloaded
(nr_running > width). I have two small patches which fix this but since
Vincent was embarking on a re-work which also addressed this I dropped them.
We've also run a series of performance tests we use to check for regressions
and did not find any bad results on our workloads and systems.
So...
Tested-by: Phil Auld
Cheers,
Phil
--
On Wed, Aug 28, 2019 at 06:01:14PM +0200 Peter Zijlstra wrote:
> On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote:
> > On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote:
>
> > > And given MDS, I'm still not entirely convinced it all makes sense.
error = PAC_RESET_KEYS(me, arg2);
> > break;
> > + case PR_CORE_ISOLATE:
> > +#ifdef CONFIG_SCHED_CORE
> > + current->core_cookie = (unsigned long)current;
>
> This needs to then also force a reschedule of current. And there
urn -EINVAL.
> This peculiarity is documented by commit 5c56dfe63b6e ("clk: Add comment
> about __of_clk_get_by_name() error values").
>
> Let's further document this function so that it's clear what the return
> value is and how to use the arguments to parse
On 26/08/2019 02:07, Jonathan Cameron wrote:
On Wed, 21 Aug 2019 11:12:00 +0200
Michal Simek wrote:
On 21. 08. 19 4:11, Phil Reid wrote:
On 20/08/2019 22:11, Michal Simek wrote:
Add support for using label property for easier device identification via
iio framework.
Signed-off-by: Michal
ure overrun, and
> > WARN on any overruns? We wouldn't expect overruns, but their
> > existence would indicate an over-loaded node or too short of a
> > cfs_period. Additionally, it would be interesting to see if we could
> > capture the offset between when the bandwidth was refilled, and when
> > the timer was supposed to fire. I've always done all my calculations
> > assuming that the timer fires and is handled exceedingly close to the
> > time it was supposed to fire. Although, if the node is running that
> > overloaded you probably have many more problems than worrying about
> > timer warnings.
>
> That "overrun" there is not really an overrun - it's the number of
> complete periods the timer has been inactive for. It was used so that a
> given tg's period timer would keep the same
> phase/offset/whatever-you-call-it, even if it goes idle for a while,
> rather than having the next period start N ms after a task wakes up.
>
> Also, poor choices by userspace is not generally something the kernel
> generally WARNs on, as I understand it.
I don't think it matters in the start_cfs_bandwidth case, anyway. We do
effectively check in sched_cfs_period_timer.
Cleanup looks okay to me as well.
Cheers,
Phil
--
On 19/08/2019 03:32, Jonathan Cameron wrote:
On Mon, 12 Aug 2019 19:08:12 +0800
Phil Reid wrote:
G'day Martin / Jonathan,
On 12/08/2019 18:37, Martin Kaiser wrote:
Hi Jonathan,
Thus wrote Jonathan Cameron (ji...@kernel.org):
The patch is fine, but I'm wondering about wheth