Re: [PATCH v2 10/12] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-07-26 Thread Suren Baghdasaryan
Sorry for the delay. Overlooked this comment... On Tue, Jul 24, 2018 at 8:49 AM, Patrick Bellasi wrote: > On 24-Jul 08:28, Suren Baghdasaryan wrote: > > Hi Patrick. Thanks for the explanation and links. No more questions > > from me on this one :) > > No problems at a

Re: [PATCH 0/10] psi: pressure stall information for CPU, memory, and IO v2

2018-07-27 Thread Suren Baghdasaryan
On Thu, Jul 26, 2018 at 1:07 PM, Johannes Weiner wrote: > On Thu, Jul 26, 2018 at 11:07:32AM +1000, Singh, Balbir wrote: >> On 7/25/18 1:15 AM, Johannes Weiner wrote: >> > On Tue, Jul 24, 2018 at 07:14:02AM +1000, Balbir Singh wrote: >> >> Does the mechanism scale? I am a little concerned about ho

Re: [PATCH v2 02/12] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

2018-07-19 Thread Suren Baghdasaryan
On Mon, Jul 16, 2018 at 1:28 AM, Patrick Bellasi wrote: > Utilization clamping requires each CPU to know which clamp values are > assigned to tasks that are currently RUNNABLE on that CPU. > Multiple tasks can be assigned the same clamp value and tasks with > different clamp values can be concurre

Re: [PATCH v2 03/12] sched/core: uclamp: add CPU's clamp groups accounting

2018-07-20 Thread Suren Baghdasaryan
Hi Patrick, On Mon, Jul 16, 2018 at 1:28 AM, Patrick Bellasi wrote: > Utilization clamping allows to clamp the utilization of a CPU within a > [util_min, util_max] range. This range depends on the set of currently > RUNNABLE tasks on a CPU, where each task references two "clamp groups" > defining

Re: [PATCH v2 02/12] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

2018-07-20 Thread Suren Baghdasaryan
Hi Patrick, On Fri, Jul 20, 2018 at 8:11 AM, Patrick Bellasi wrote: > Hi Suren, > thanks for the review, all good point... some more comments follow > inline. > > On 19-Jul 16:51, Suren Baghdasaryan wrote: >> On Mon, Jul 16, 2018 at 1:28 AM, Patr

Re: [PATCH v2 07/12] sched/core: uclamp: enforce last task UCLAMP_MAX

2018-07-20 Thread Suren Baghdasaryan
Hi Patrick, On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi wrote: > When a util_max clamped task sleeps, its clamp constraints are removed > from the CPU. However, the blocked utilization on that CPU can still be > higher than the max clamp value enforced while that task was running. > This max

Re: [PATCH v2 08/12] sched/core: uclamp: extend cpu's cgroup controller

2018-07-20 Thread Suren Baghdasaryan
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi wrote: > The cgroup's CPU controller allows to assign a specified (maximum) > bandwidth to the tasks of a group. However this bandwidth is defined and > enforced only on a temporal base, without considering the actual > frequency a CPU is running on

Re: [PATCH v2 08/12] sched/core: uclamp: extend cpu's cgroup controller

2018-07-20 Thread Suren Baghdasaryan
On Fri, Jul 20, 2018 at 7:37 PM, Suren Baghdasaryan wrote: > On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi > wrote: >> The cgroup's CPU controller allows to assign a specified (maximum) >> bandwidth to the tasks of a group. However this bandwidth is defined and >>

Re: [PATCH v2 10/12] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-07-21 Thread Suren Baghdasaryan
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi wrote: > When a task's util_clamp value is configured via sched_setattr(2), this > value has to be properly accounted in the corresponding clamp group > every time the task is enqueued and dequeued. When cgroups are also in > use, per-task clamp val

Re: [PATCH v2 11/12] sched/core: uclamp: update CPU's refcount on TG's clamp changes

2018-07-21 Thread Suren Baghdasaryan
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi wrote: > When a task group refcounts a new clamp group, we need to ensure that > the new clamp values are immediately enforced to all its tasks which are > currently RUNNABLE. This is to ensure that all currently RUNNABLE tasks > are boosted

Re: [PATCH v2 12/12] sched/core: uclamp: use percentage clamp values

2018-07-21 Thread Suren Baghdasaryan
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi wrote: > The utilization is a well defined property of tasks and CPUs with an > in-kernel representation based on power-of-two values. > The current representation, in the [0..SCHED_CAPACITY_SCALE] range, > allows efficient computations in hot-paths

Re: [PATCH v2 07/12] sched/core: uclamp: enforce last task UCLAMP_MAX

2018-07-23 Thread Suren Baghdasaryan
On Mon, Jul 23, 2018 at 8:02 AM, Patrick Bellasi wrote: > On 20-Jul 18:23, Suren Baghdasaryan wrote: >> Hi Patrick, > > Hi Sure, > thank! > >> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi >> wrote: > > [...] > >> > @@ -977,13 +99

Re: [PATCH v2 10/12] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-07-23 Thread Suren Baghdasaryan
On Mon, Jul 23, 2018 at 8:40 AM, Patrick Bellasi wrote: > On 21-Jul 20:05, Suren Baghdasaryan wrote: >> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi >> wrote: >> > When a task's util_clamp value is configured via sched_setattr(2), this >> > val

Re: [PATCH v2 10/12] sched/core: uclamp: use TG's clamps to restrict Task's clamps

2018-07-24 Thread Suren Baghdasaryan
Hi Patrick. Thanks for the explanation and links. No more questions from me on this one :) On Tue, Jul 24, 2018 at 2:56 AM, Patrick Bellasi wrote: > On 23-Jul 10:11, Suren Baghdasaryan wrote: >> On Mon, Jul 23, 2018 at 8:40 AM, Patrick Bellasi >> wrote: >> > On 21-Jul 2

Re: [PATCH v2 12/12] sched/core: uclamp: use percentage clamp values

2018-07-24 Thread Suren Baghdasaryan
On Tue, Jul 24, 2018 at 9:43 AM, Patrick Bellasi wrote: > On 21-Jul 21:04, Suren Baghdasaryan wrote: >> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi >> wrote: > > [...] > >> > +static inline unsigned int scale_from_percent(unsigned int pct) >&
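The `scale_from_percent()` helper quoted above converts a userspace-facing clamp percentage in [0..100] into the kernel's internal [0..SCHED_CAPACITY_SCALE] utilization range. A minimal userspace sketch of that conversion; the truncating-division rounding policy here is an assumption, and part of this thread is precisely about picking a rounding that keeps percent round-trips consistent:

```c
#include <assert.h>

/* Kernel's power-of-two utilization scale (1024 upstream). */
#define SCHED_CAPACITY_SCALE 1024

/* Map a clamp percentage in [0..100] onto [0..SCHED_CAPACITY_SCALE].
 * Truncating division is an assumption for this sketch; the thread
 * discusses the exact rounding the patch should use. */
static inline unsigned int scale_from_percent(unsigned int pct)
{
	assert(pct <= 100);
	return (SCHED_CAPACITY_SCALE * pct) / 100;
}
```

With this mapping, 50% yields 512 and 100% yields exactly SCHED_CAPACITY_SCALE, while intermediate values such as 20% truncate (204 rather than 204.8).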

Re: [RFC PATCH 10/10] psi: aggregate ongoing stall events when somebody reads pressure

2018-07-13 Thread Suren Baghdasaryan
pressure > metrics are read, the current per-cpu states, if any, are taken into > account as well. > > Any ongoing states are concluded, their time snapshotted, and then > restarted. This requires holding the rq lock to avoid corruption. It > could use some form of rq lock rateli

Re: [RFC PATCH 10/10] psi: aggregate ongoing stall events when somebody reads pressure

2018-07-13 Thread Suren Baghdasaryan
On Fri, Jul 13, 2018 at 3:49 PM, Johannes Weiner wrote: > On Fri, Jul 13, 2018 at 03:13:07PM -0700, Suren Baghdasaryan wrote: >> On Thu, Jul 12, 2018 at 10:29 AM, Johannes Weiner wrote: >> > might want to know about and react to stall states before they have >> > even

Re: [PATCH] dm bufio: fix shrinker scans when (nr_to_scan < retain_target)

2018-01-04 Thread Suren Baghdasaryan
Dear kernel maintainers. I know it was close to the holiday season when I sent this patch last month, so a delay was expected. Could you please take a look at it and provide your feedback? Thanks! On Wed, Dec 6, 2017 at 9:27 AM, Suren Baghdasaryan wrote: > When system is under memory pressure it

Re: [PATCH 0/7] psi: pressure stall information for CPU, memory, and IO

2018-05-25 Thread Suren Baghdasaryan
Hi Johannes, I tried your previous memdelay patches before this new set was posted and results were promising for predicting when Android system is close to OOM. I'm definitely going to try this one after I backport it to 4.9. On Mon, May 7, 2018 at 2:01 PM, Johannes Weiner wrote: > Hi, > > I pre

Re: [PATCH 3/6] psi: eliminate lazy clock mode

2018-12-17 Thread Suren Baghdasaryan
On Mon, Dec 17, 2018 at 6:58 AM Peter Zijlstra wrote: > > On Fri, Dec 14, 2018 at 09:15:05AM -0800, Suren Baghdasaryan wrote: > > Eliminate the idle mode and keep the worker doing 2s update intervals > > at all times. > > That sounds like a bad deal.. esp. so for batt

Re: [PATCH 4/6] psi: introduce state_mask to represent stalled psi states

2018-12-17 Thread Suren Baghdasaryan
On Mon, Dec 17, 2018 at 7:55 AM Peter Zijlstra wrote: > > On Fri, Dec 14, 2018 at 09:15:06AM -0800, Suren Baghdasaryan wrote: > > The psi monitoring patches will need to determine the same states as > > record_times(). To avoid calculating them twice, maintain a state ma

Re: [PATCH 6/6] psi: introduce psi monitor

2018-12-17 Thread Suren Baghdasaryan
On Mon, Dec 17, 2018 at 8:22 AM Peter Zijlstra wrote: > > On Fri, Dec 14, 2018 at 09:15:08AM -0800, Suren Baghdasaryan wrote: > > +ssize_t psi_trigger_parse(char *buf, size_t nbytes, enum psi_res res, > > + enum psi_states *state, u32 *threshold_us, u32 *win_sz_us) > >
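The `psi_trigger_parse()` signature quoted above parses trigger strings written to the psi files in the form `"<some|full> <stall threshold us> <window us>"`, e.g. `"some 150000 1000000"`. A hedged userspace sketch of that parsing; the error convention and the omitted window-size limits are simplifications of what the kernel code does:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Parse "some 150000 1000000"-style psi trigger strings: a stalled-state
 * keyword, then a stall threshold and a tracking window, both in usecs.
 * Returns 0 on success, -1 on malformed input; the kernel returns
 * -EINVAL and additionally enforces min/max window sizes. */
static int trigger_parse(const char *buf, int *full,
			 unsigned int *threshold_us, unsigned int *win_sz_us)
{
	char state[8];

	if (sscanf(buf, "%7s %u %u", state, threshold_us, win_sz_us) != 3)
		return -1;
	if (!strcmp(state, "some"))
		*full = 0;
	else if (!strcmp(state, "full"))
		*full = 1;
	else
		return -1;
	/* A stall threshold longer than its window can never trigger. */
	if (*threshold_us > *win_sz_us)
		return -1;
	return 0;
}
```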

Re: [PATCH 6/6] psi: introduce psi monitor

2018-12-17 Thread Suren Baghdasaryan
On Mon, Dec 17, 2018 at 8:37 AM Peter Zijlstra wrote: > > On Fri, Dec 14, 2018 at 09:15:08AM -0800, Suren Baghdasaryan wrote: > > @@ -358,28 +526,23 @@ static void psi_update_work(struct work_struct *work) > > { > > struct delayed_work *dwork; > >

Re: [PATCH 6/6] psi: introduce psi monitor

2018-12-18 Thread Suren Baghdasaryan
2018 at 9:30 AM Johannes Weiner wrote: > > On Tue, Dec 18, 2018 at 11:46:22AM +0100, Peter Zijlstra wrote: > > On Mon, Dec 17, 2018 at 05:21:05PM -0800, Suren Baghdasaryan wrote: > > > On Mon, Dec 17, 2018 at 8:22 AM Peter Zijlstra > > > wrote: > > > > > >

Re: [PATCH 6/6] psi: introduce psi monitor

2018-12-18 Thread Suren Baghdasaryan
On Tue, Dec 18, 2018 at 11:18 AM Joel Fernandes wrote: > > On Tue, Dec 18, 2018 at 9:58 AM 'Suren Baghdasaryan' via kernel-team > wrote: > > > > Current design supports only whole percentages and if userspace needs > > more granularity then it has to us

[PATCH 5/6] psi: rename psi fields in preparation for psi trigger addition

2018-12-14 Thread Suren Baghdasaryan
Renaming psi_group structure member fields used for calculating psi totals and averages for clear distinction between them and trigger-related fields that will be added next. Signed-off-by: Suren Baghdasaryan --- include/linux/psi_types.h | 15 --- kernel/sched/psi.c| 26

[PATCH 2/6] kernel: cgroup: add poll file operation

2018-12-14 Thread Suren Baghdasaryan
have per-fd trigger configurations. Signed-off-by: Johannes Weiner Signed-off-by: Suren Baghdasaryan --- include/linux/cgroup-defs.h | 4 kernel/cgroup/cgroup.c | 12 2 files changed, 16 insertions(+) diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h

[PATCH 1/6] fs: kernfs: add poll file operation

2018-12-14 Thread Suren Baghdasaryan
have per-fd trigger configurations. Signed-off-by: Johannes Weiner Signed-off-by: Suren Baghdasaryan --- fs/kernfs/file.c | 31 --- include/linux/kernfs.h | 6 ++ 2 files changed, 26 insertions(+), 11 deletions(-) diff --git a/fs/kernfs/file.c b/fs/kernfs

[PATCH 3/6] psi: eliminate lazy clock mode

2018-12-14 Thread Suren Baghdasaryan
Signed-off-by: Suren Baghdasaryan --- kernel/sched/psi.c | 55 +++--- 1 file changed, 22 insertions(+), 33 deletions(-) diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c index fe24de3fbc93..d2b9c9a1a62f 100644 --- a/kernel/sched/psi.c +++ b/kernel/s

[PATCH 6/6] psi: introduce psi monitor

2018-12-14 Thread Suren Baghdasaryan
duration of one tracking window to avoid repeated activations/deactivations when psi signal is bouncing. Notifications to the users are rate-limited to one per tracking window. Signed-off-by: Suren Baghdasaryan --- Documentation/accounting/psi.txt | 105 +++ include/linux/psi.h | 10
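The rate limiting described in the cover text, at most one notification per tracking window, can be sketched as a per-trigger timestamp check. Names and the monotonic-time assumption are illustrative, not the kernel's actual fields:

```c
#include <assert.h>

/* One notification per tracking window: remember when the last event
 * fired and suppress new ones until a full window has elapsed. */
struct psi_trigger_rl {
	unsigned long long win_sz_ns;     /* tracking window length */
	unsigned long long last_event_ns; /* time of last notification */
};

/* Returns 1 if a notification should fire at @now_ns, 0 if it is
 * suppressed because one already fired within the current window. */
static int trigger_should_notify(struct psi_trigger_rl *t,
				 unsigned long long now_ns)
{
	if (now_ns < t->last_event_ns + t->win_sz_ns)
		return 0; /* rate-limited */
	t->last_event_ns = now_ns;
	return 1;
}
```

This is what keeps a bouncing psi signal from waking userspace more than once per window.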

[PATCH 4/6] psi: introduce state_mask to represent stalled psi states

2018-12-14 Thread Suren Baghdasaryan
The psi monitoring patches will need to determine the same states as record_times(). To avoid calculating them twice, maintain a state mask that can be consulted cheaply. Do this in a separate patch to keep the churn in the main feature patch at a minimum. Signed-off-by: Suren Baghdasaryan
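The idea above, compute the stalled states once into a bitmask so both `record_times()` and the monitor can consult them cheaply, can be sketched as follows. The bit values and the reduction to memory-only states are assumptions for illustration; the kernel covers CPU and IO states as well:

```c
#include <assert.h>

/* Illustrative psi state bits; actual values and the full state set
 * (CPU, IO, memory) live in the kernel's psi code. */
#define PSI_MEM_SOME (1u << 0)
#define PSI_MEM_FULL (1u << 1)

/* Derive the stalled-state mask once from per-cpu task counts so later
 * code tests bits instead of re-deriving each state:
 *   "some" = at least one task is stalled on memory,
 *   "full" = tasks are stalled and nothing productive is running. */
static unsigned int compute_state_mask(int nr_memstall, int nr_running)
{
	unsigned int mask = 0;

	if (nr_memstall)
		mask |= PSI_MEM_SOME;
	if (nr_memstall && !nr_running)
		mask |= PSI_MEM_FULL;
	return mask;
}
```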

[PATCH 0/6] psi: pressure stall monitors

2018-12-14 Thread Suren Baghdasaryan
ed in collaboration with Johannes Weiner. The patches are based on 4.20-rc6. Johannes Weiner (3): fs: kernfs: add poll file operation kernel: cgroup: add poll file operation psi: eliminate lazy clock mode Suren Baghdasaryan (3): psi: introduce state_mask to represent stalled psi s

Re: [PATCH 0/9] psi: pressure stall information for CPU, memory, and IO v4

2018-09-18 Thread Suren Baghdasaryan
Hi Daniel, On Sun, Sep 16, 2018 at 10:22 PM, Daniel Drake wrote: > Hi Suren > > On Fri, Sep 7, 2018 at 11:58 PM, Suren Baghdasaryan wrote: >> Thanks for the new patchset! Backported to 4.9 and retested on ARMv8 8 >> code system running Android. Signals behave as expected

Re: [PATCH 0/9] psi: pressure stall information for CPU, memory, and IO v4

2018-09-18 Thread Suren Baghdasaryan
using PSI for Android I will try to upstream the backport. If upstream rejects it we will have to merge it into Android common kernel repo as a last resort. Hope this answers your question. > I guess that this patch is to big for the LTS tree. > > On 09/07/2018 05:58 PM, Suren Baghdasaryan wr

Re: [PATCH RFC v3 12/13] mm: add SLAB_TYPESAFE_BY_RCU to files_cache

2024-08-13 Thread Suren Baghdasaryan
On Mon, Aug 12, 2024 at 11:07 PM Mateusz Guzik wrote: > > On Mon, Aug 12, 2024 at 09:29:16PM -0700, Andrii Nakryiko wrote: > > Add RCU protection for file struct's backing memory by adding > > SLAB_TYPESAFE_BY_RCU flag to files_cachep. This will allow to locklessly > > access struct file's fields

Re: [PATCH RFC v3 13/13] uprobes: add speculative lockless VMA to inode resolution

2024-08-13 Thread Suren Baghdasaryan
On Mon, Aug 12, 2024 at 11:18 PM Mateusz Guzik wrote: > > On Mon, Aug 12, 2024 at 09:29:17PM -0700, Andrii Nakryiko wrote: > > Now that files_cachep is SLAB_TYPESAFE_BY_RCU, we can safely access > > vma->vm_file->f_inode lockless only under rcu_read_lock() protection, > > attempting uprobe look up

Re: [PATCH RFC v3 13/13] uprobes: add speculative lockless VMA to inode resolution

2024-08-15 Thread Suren Baghdasaryan
On Thu, Aug 15, 2024 at 9:47 AM Andrii Nakryiko wrote: > > On Thu, Aug 15, 2024 at 6:44 AM Mateusz Guzik wrote: > > > > On Tue, Aug 13, 2024 at 08:36:03AM -0700, Suren Baghdasaryan wrote: > > > On Mon, Aug 12, 2024 at 11:18 PM Mateusz Guzik wrote: > > > >

Re: [PATCH RFC v3 13/13] uprobes: add speculative lockless VMA to inode resolution

2024-08-15 Thread Suren Baghdasaryan
On Thu, Aug 15, 2024 at 11:58 AM Jann Horn wrote: > > +brauner for "struct file" lifetime > > On Thu, Aug 15, 2024 at 7:45 PM Suren Baghdasaryan wrote: > > On Thu, Aug 15, 2024 at 9:47 AM Andrii Nakryiko > > wrote: > > > > > >

Re: [RFC] memory reserve for userspace oom-killer

2021-04-20 Thread Suren Baghdasaryan
Hi Folks, On Tue, Apr 20, 2021 at 12:18 PM Roman Gushchin wrote: > > On Mon, Apr 19, 2021 at 06:44:02PM -0700, Shakeel Butt wrote: > > Proposal: Provide memory guarantees to userspace oom-killer. > > > > Background: > > > > Issues with kernel oom-killer: > > 1. Very conservative and prefer to rec

Re: [PATCH v3 1/1] process_madvise.2: Add process_madvise man page

2021-02-16 Thread Suren Baghdasaryan
Hi Michael, On Sat, Feb 13, 2021 at 2:04 PM Michael Kerrisk (man-pages) wrote: > > Hello Suren, > > On 2/2/21 11:12 PM, Suren Baghdasaryan wrote: > > Hi Michael, > > > > On Tue, Feb 2, 2021 at 2:45 AM Michael Kerrisk (man-pages) > > wrote: > >

Re: [PATCH 0/5] 4.14 backports of fixes for "CoW after fork() issue"

2021-04-07 Thread Suren Baghdasaryan
On Wed, Apr 7, 2021 at 9:07 AM Linus Torvalds wrote: > > On Wed, Apr 7, 2021 at 6:22 AM Vlastimil Babka wrote: > > > > 1) Ignore the issue (outside of Android at least). The security model of > > zygote > > is unusual. Where else a parent of fork() doesn't trust the child, which is > > the > >

Re: [PATCH 0/5] 4.14 backports of fixes for "CoW after fork() issue"

2021-04-07 Thread Suren Baghdasaryan
On Wed, Apr 7, 2021 at 12:23 PM Linus Torvalds wrote: > > On Wed, Apr 7, 2021 at 11:47 AM Mikulas Patocka wrote: > > > > So, we fixed it, but we don't know why. > > > > Peter Xu's patchset that fixed it is here: > > https://lore.kernel.org/lkml/20200821234958.7896-1-pet...@redhat.com/ > > Yeah, t

Re: [PATCH v8 03/16] sched/core: uclamp: Enforce last task's UCLAMP_MAX

2019-04-17 Thread Suren Baghdasaryan
Hi Patrick, On Tue, Apr 2, 2019 at 3:42 AM Patrick Bellasi wrote: > > When a task sleeps it removes its max utilization clamp from its CPU. > However, the blocked utilization on that CPU can be higher than the max > clamp value enforced while the task was running. This allows undesired > CPU fre

Re: [PATCH v8 06/16] sched/core: uclamp: Extend sched_setattr() to support utilization clamping

2019-04-17 Thread Suren Baghdasaryan
On Tue, Apr 2, 2019 at 3:42 AM Patrick Bellasi wrote: > > The SCHED_DEADLINE scheduling class provides an advanced and formal > model to define tasks requirements that can translate into proper > decisions for both task placements and frequencies selections. Other > classes have a more simplified

Re: [PATCH v8 08/16] sched/core: uclamp: Set default clamps for RT tasks

2019-04-17 Thread Suren Baghdasaryan
On Tue, Apr 2, 2019 at 3:42 AM Patrick Bellasi wrote: > > By default FAIR tasks start without clamps, i.e. neither boosted nor > capped, and they run at the best frequency matching their utilization > demand. This default behavior does not fit RT tasks which instead are > expected to run at the m

Re: [PATCH v8 12/16] sched/core: uclamp: Extend CPU's cgroup controller

2019-04-17 Thread Suren Baghdasaryan
On Tue, Apr 2, 2019 at 3:43 AM Patrick Bellasi wrote: > > The cgroup CPU bandwidth controller allows to assign a specified > (maximum) bandwidth to the tasks of a group. However this bandwidth is > defined and enforced only on a temporal base, without considering the > actual frequency a CPU is ru

Re: [PATCH v8 04/16] sched/core: uclamp: Add system default clamps

2019-04-17 Thread Suren Baghdasaryan
On Tue, Apr 2, 2019 at 3:42 AM Patrick Bellasi wrote: > > Tasks without a user-defined clamp value are considered not clamped > and by default their utilization can have any value in the > [0..SCHED_CAPACITY_SCALE] range. > > Tasks with a user-defined clamp value are allowed to request any value >

Re: [RFC 2/2] signal: extend pidfd_send_signal() to allow expedited process killing

2019-04-25 Thread Suren Baghdasaryan
On Fri, Apr 12, 2019 at 7:14 AM Daniel Colascione wrote: > > On Thu, Apr 11, 2019 at 11:53 PM Michal Hocko wrote: > > > > On Thu 11-04-19 08:33:13, Matthew Wilcox wrote: > > > On Wed, Apr 10, 2019 at 06:43:53PM -0700, Suren Baghdasaryan wrote: > > > > A

Re: [RFC 1/2] mm: oom: expose expedite_reclaim to use oom_reaper outside of oom_kill.c

2019-04-25 Thread Suren Baghdasaryan
On Thu, Apr 25, 2019 at 2:13 PM Tetsuo Handa wrote: > > On 2019/04/11 10:43, Suren Baghdasaryan wrote: > > diff --git a/mm/oom_kill.c b/mm/oom_kill.c > > index 3a2484884cfd..6449710c8a06 100644 > > --- a/mm/oom_kill.c > > +++ b/mm/oom_kill.c > > @@ -1102,6 +

Re: [RFC] simple_lmk: Introduce Simple Low Memory Killer for Android

2019-03-12 Thread Suren Baghdasaryan
On Tue, Mar 12, 2019 at 1:05 AM Michal Hocko wrote: > > On Mon 11-03-19 15:15:35, Suren Baghdasaryan wrote: > > On Mon, Mar 11, 2019 at 1:46 PM Sultan Alsawaf > > wrote: > > > > > > On Mon, Mar 11, 2019 at 01:10:36PM -0700, Suren Baghdasaryan wrote: > >

Re: [RFC] simple_lmk: Introduce Simple Low Memory Killer for Android

2019-03-12 Thread Suren Baghdasaryan
On Tue, Mar 12, 2019 at 9:58 AM Michal Hocko wrote: > > On Tue 12-03-19 09:37:41, Sultan Alsawaf wrote: > > I have not had a chance to look at PSI yet, but > > unless a PSI-enabled solution allows allocations to reach the same point as > > when > > the OOM killer is invoked (which is contradictor

Re: [PATCH v7 01/15] sched/core: uclamp: Add CPU's clamp buckets refcounting

2019-03-13 Thread Suren Baghdasaryan
On Wed, Mar 13, 2019 at 8:15 AM Patrick Bellasi wrote: > > On 12-Mar 13:52, Dietmar Eggemann wrote: > > On 2/8/19 11:05 AM, Patrick Bellasi wrote: > > > > [...] > > > > > +config UCLAMP_BUCKETS_COUNT > > > + int "Number of supported utilization clamp buckets" > > > + range 5 20 > > > + defau

Re: [PATCH v7 01/15] sched/core: uclamp: Add CPU's clamp buckets refcounting

2019-03-13 Thread Suren Baghdasaryan
On Wed, Mar 13, 2019 at 12:46 PM Peter Zijlstra wrote: > > On Wed, Mar 13, 2019 at 03:23:59PM +, Patrick Bellasi wrote: > > On 13-Mar 15:09, Peter Zijlstra wrote: > > > On Fri, Feb 08, 2019 at 10:05:40AM +, Patrick Bellasi wrote: > > > > > +static inline void uclamp_rq_update(struct rq *rq

Re: [PATCH v7 01/15] sched/core: uclamp: Add CPU's clamp buckets refcounting

2019-03-13 Thread Suren Baghdasaryan
On Wed, Mar 13, 2019 at 6:52 AM Peter Zijlstra wrote: > > On Fri, Feb 08, 2019 at 10:05:40AM +, Patrick Bellasi wrote: > > +/* > > + * When a task is enqueued on a rq, the clamp bucket currently defined by > > the > > + * task's uclamp::bucket_id is reference counted on that rq. This also > >

Re: [PATCH v7 01/15] sched/core: uclamp: Add CPU's clamp buckets refcounting

2019-03-13 Thread Suren Baghdasaryan
On Fri, Feb 8, 2019 at 2:06 AM Patrick Bellasi wrote: > > Utilization clamping allows to clamp the CPU's utilization within a > [util_min, util_max] range, depending on the set of RUNNABLE tasks on > that CPU. Each task references two "clamp buckets" defining its minimum > and maximum (util_{min,m

Re: [PATCH v7 02/15] sched/core: uclamp: Enforce last task UCLAMP_MAX

2019-03-13 Thread Suren Baghdasaryan
On Wed, Mar 13, 2019 at 9:16 AM Patrick Bellasi wrote: > > On 13-Mar 15:12, Peter Zijlstra wrote: > > On Fri, Feb 08, 2019 at 10:05:41AM +, Patrick Bellasi wrote: > > > +static inline void uclamp_idle_reset(struct rq *rq, unsigned int > > > clamp_id, > > > +uns

Re: [PATCH v7 01/15] sched/core: uclamp: Add CPU's clamp buckets refcounting

2019-03-14 Thread Suren Baghdasaryan
On Thu, Mar 14, 2019 at 7:46 AM Patrick Bellasi wrote: > > On 13-Mar 14:32, Suren Baghdasaryan wrote: > > On Fri, Feb 8, 2019 at 2:06 AM Patrick Bellasi > > wrote: > > > > > > Utilization clamping allows to clamp the CPU's utilization within a > >

Re: [PATCH v7 12/15] sched/core: uclamp: Propagate parent clamps

2019-03-14 Thread Suren Baghdasaryan
On Fri, Feb 8, 2019 at 2:06 AM Patrick Bellasi wrote: > > In order to properly support hierarchical resources control, the cgroup > delegation model requires that attribute writes from a child group never > fail but still are (potentially) constrained based on parent's assigned > resources. This r

Re: [PATCH v7 01/15] sched/core: uclamp: Add CPU's clamp buckets refcounting

2019-03-14 Thread Suren Baghdasaryan
On Thu, Mar 14, 2019 at 8:41 AM Patrick Bellasi wrote: > > On 14-Mar 08:29, Suren Baghdasaryan wrote: > > On Thu, Mar 14, 2019 at 7:46 AM Patrick Bellasi > > wrote: > > > On 13-Mar 14:32, Suren Baghdasaryan wrote: > > > > On Fri, Feb 8, 2019 at

Re: [RFC] simple_lmk: Introduce Simple Low Memory Killer for Android

2019-03-15 Thread Suren Baghdasaryan
On Thu, Mar 14, 2019 at 9:37 PM Daniel Colascione wrote: > > On Thu, Mar 14, 2019 at 8:16 PM Steven Rostedt wrote: > > > > On Thu, 14 Mar 2019 13:49:11 -0700 > > Sultan Alsawaf wrote: > > > > > Perhaps I'm missing something, but if you want to know when a process has > > > died > > > after send

[PATCH v4 1/1] psi: introduce psi monitor

2019-02-05 Thread Suren Baghdasaryan
activations/deactivations when psi signal is bouncing. Notifications to the users are rate-limited to one per tracking window. Signed-off-by: Suren Baghdasaryan Signed-off-by: Johannes Weiner --- This is respin of: https://lwn.net/ml/linux-kernel/20190124211518.244221-1-surenb%40google.com/ First 4

Re: [PATCH] psi: fix aggregation idle shut-off

2019-02-05 Thread Suren Baghdasaryan
Hi Andrew, On Mon, Jan 28, 2019 at 3:06 PM Andrew Morton wrote: > > On Wed, 16 Jan 2019 14:35:01 -0500 Johannes Weiner wrote: > > > psi has provisions to shut off the periodic aggregation worker when > > there is a period of no task activity - and thus no data that needs > > aggregating. However

[RFC 2/2] signal: extend pidfd_send_signal() to allow expedited process killing

2019-04-10 Thread Suren Baghdasaryan
Add new SS_EXPEDITE flag to be used when sending SIGKILL via pidfd_send_signal() syscall to allow expedited memory reclaim of the victim process. The usage of this flag is currently limited to SIGKILL signal and only to privileged users. Signed-off-by: Suren Baghdasaryan --- include/linux/sched

[RFC 1/2] mm: oom: expose expedite_reclaim to use oom_reaper outside of oom_kill.c

2019-04-10 Thread Suren Baghdasaryan
Create an API to allow users outside of oom_kill.c to mark a victim and wake up oom_reaper thread for expedited memory reclaim of the process being killed. Signed-off-by: Suren Baghdasaryan --- include/linux/oom.h | 1 + mm/oom_kill.c | 15 +++ 2 files changed, 16 insertions

[RFC 0/2] opportunistic memory reclaim of a killed process

2019-04-10 Thread Suren Baghdasaryan
/sec = min reclaim speed: 856 MB/sec  3236 MB/sec The patches are based on 5.1-rc1 Suren Baghdasaryan (2): mm: oom: expose expedite_reclaim to use oom_reaper outside of oom_kill.c signal: extend pidfd_send_signal

Re: [RFC 2/2] signal: extend pidfd_send_signal() to allow expedited process killing

2019-04-11 Thread Suren Baghdasaryan
2019 at 06:43:53PM -0700, Suren Baghdasaryan wrote: > > Add new SS_EXPEDITE flag to be used when sending SIGKILL via > > pidfd_send_signal() syscall to allow expedited memory reclaim of the > > victim process. The usage of this flag is currently limited to SIGKILL > >

Re: [RFC 2/2] signal: extend pidfd_send_signal() to allow expedited process killing

2019-04-11 Thread Suren Baghdasaryan
On Thu, Apr 11, 2019 at 8:18 AM Suren Baghdasaryan wrote: > > Thanks for the feedback! > Just to be clear, this implementation is used in this RFC as a > reference to explain the intent. To be honest I don't think it will be > adopted as is even if the idea survives scrutiny

Re: [RFC][PATCH v6 1/7] drm: Add a sharable drm page-pool implementation

2021-02-10 Thread Suren Baghdasaryan
On Wed, Feb 10, 2021 at 5:06 AM Daniel Vetter wrote: > > On Tue, Feb 09, 2021 at 12:16:51PM -0800, Suren Baghdasaryan wrote: > > On Tue, Feb 9, 2021 at 12:03 PM Daniel Vetter wrote: > > > > > > On Tue, Feb 9, 2021 at 6:46 PM Christian König > > > wr

Re: [PATCH] dma-buf: system_heap: do not warn for costly allocation

2021-02-10 Thread Suren Baghdasaryan
The code looks fine to me. Description needs a bit polishing :) On Wed, Feb 10, 2021 at 8:26 AM Minchan Kim wrote: > > Linux VM is not hard to support PAGE_ALLOC_COSTLY_ODER allocation > so normally expects driver passes __GFP_NOWARN in that case > if they has fallback options. > > system_heap in

Re: [RFC][PATCH v6 1/7] drm: Add a sharable drm page-pool implementation

2021-02-10 Thread Suren Baghdasaryan
On Wed, Feb 10, 2021 at 9:21 AM Daniel Vetter wrote: > > On Wed, Feb 10, 2021 at 5:39 PM Suren Baghdasaryan wrote: > > > > On Wed, Feb 10, 2021 at 5:06 AM Daniel Vetter wrote: > > > > > > On Tue, Feb 09, 2021 at 12:16:51PM -0800, Suren Baghdasaryan wrote: >

Re: [RFC][PATCH v6 1/7] drm: Add a sharable drm page-pool implementation

2021-02-10 Thread Suren Baghdasaryan
On Wed, Feb 10, 2021 at 10:32 AM Christian König wrote: > > > > Am 10.02.21 um 17:39 schrieb Suren Baghdasaryan: > > On Wed, Feb 10, 2021 at 5:06 AM Daniel Vetter wrote: > >> On Tue, Feb 09, 2021 at 12:16:51PM -0800, Suren Baghdasaryan wrote: > >>> On Tue

Re: [PATCH v3 1/1] process_madvise.2: Add process_madvise man page

2021-02-18 Thread Suren Baghdasaryan
On Wed, Feb 17, 2021 at 11:55 PM Michael Kerrisk (man-pages) wrote: > > Hello Suren, > > >> Thanks. I added a few words to clarify this.> > > Any link where I can see the final version? > > Sure: > https://git.kernel.org/pub/scm/docs/man-pages/man-pages.git/tree/man2/process_madvise.2 > > Also ren

Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm

2021-02-02 Thread Suren Baghdasaryan
On Tue, Feb 2, 2021 at 12:51 AM Christoph Hellwig wrote: > > On Tue, Feb 02, 2021 at 12:44:44AM -0800, Suren Baghdasaryan wrote: > > On Mon, Feb 1, 2021 at 11:03 PM Christoph Hellwig > > wrote: > > > > > > IMHO the > > > > > >

Re: [PATCH v3 1/1] process_madvise.2: Add process_madvise man page

2021-02-02 Thread Suren Baghdasaryan
cumented pieces in *madvise(2)*, > as well as one other question. See below. > > On 2/2/21 6:30 AM, Suren Baghdasaryan wrote: > > Initial version of process_madvise(2) manual page. Initial text was > > extracted from [1], amended after fix [2] and more details added using > > ma

[PATCH 1/2] mm: replace BUG_ON in vm_insert_page with a return of an error

2021-02-02 Thread Suren Baghdasaryan
tifying drivers that need to clear VM_PFNMAP before using dmabuf system heap which is moving to use vm_insert_page. Suggested-by: Christoph Hellwig Signed-off-by: Suren Baghdasaryan --- mm/memory.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/mm/memory.c b/mm/memory.
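The change this patch makes, replacing `BUG_ON(vma->vm_flags & VM_PFNMAP)` in `vm_insert_page()` with `WARN_ON_ONCE` plus an error return, can be illustrated with a userspace analogue. The helper name is hypothetical; only the flag value matches the kernel headers:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>

#define VM_PFNMAP 0x00000400UL /* same flag value as the kernel headers */

/* Userspace analogue of the patch: instead of BUG_ON() crashing the
 * kernel when vm_insert_page() is called on a VM_PFNMAP vma, warn once
 * and return -EINVAL so the caller can fail gracefully. */
static int insert_page_check(unsigned long vm_flags)
{
	static int warned;

	if (vm_flags & VM_PFNMAP) {
		if (!warned) { /* WARN_ON_ONCE analogue */
			fprintf(stderr, "vm_insert_page on VM_PFNMAP vma\n");
			warned = 1;
		}
		return -EINVAL;
	}
	return 0; /* ok to insert the page */
}
```

The point of the change is exactly this contract shift: misuse by a driver becomes a recoverable error for the caller rather than a kernel crash.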

[PATCH v2 2/2] dma-buf: heaps: Map system heap pages as managed by linux vm

2021-02-02 Thread Suren Baghdasaryan
erdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html (sorry, could not find lore links for these discussions) Suggested-by: Laura Abbott Signed-off-by: Suren Baghdasaryan --- v1 post

Re: [PATCH 1/2] mm: replace BUG_ON in vm_insert_page with a return of an error

2021-02-02 Thread Suren Baghdasaryan
On Tue, Feb 2, 2021 at 5:31 PM Minchan Kim wrote: > > On Tue, Feb 02, 2021 at 04:31:33PM -0800, Suren Baghdasaryan wrote: > > Replace BUG_ON(vma->vm_flags & VM_PFNMAP) in vm_insert_page with > > WARN_ON_ONCE and returning an error. This is to ensure users of the &g

Re: [PATCH v2 2/2] dma-buf: heaps: Map system heap pages as managed by linux vm

2021-02-02 Thread Suren Baghdasaryan
On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim wrote: > > On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote: > > Currently system heap maps its buffers with VM_PFNMAP flag using > > remap_pfn_range. This results in such buffers not being accounted > > for in

Re: [PATCH v2 2/2] dma-buf: heaps: Map system heap pages as managed by linux vm

2021-02-02 Thread Suren Baghdasaryan
On Tue, Feb 2, 2021 at 6:07 PM John Stultz wrote: > > On Tue, Feb 2, 2021 at 4:31 PM Suren Baghdasaryan wrote: > > Currently system heap maps its buffers with VM_PFNMAP flag using > > remap_pfn_range. This results in such buffers not being accounted > > for in PSS calcul

Re: [PATCH 1/2] mm: replace BUG_ON in vm_insert_page with a return of an error

2021-02-02 Thread Suren Baghdasaryan
On Tue, Feb 2, 2021 at 5:55 PM Matthew Wilcox wrote: > > On Tue, Feb 02, 2021 at 04:31:33PM -0800, Suren Baghdasaryan wrote: > > Replace BUG_ON(vma->vm_flags & VM_PFNMAP) in vm_insert_page with > > WARN_ON_ONCE and returning an error. This is to ensure users of the &g

Re: [PATCH v2 2/2] dma-buf: heaps: Map system heap pages as managed by linux vm

2021-02-03 Thread Suren Baghdasaryan
On Wed, Feb 3, 2021 at 12:06 AM Christian König wrote: > > Am 03.02.21 um 03:02 schrieb Suren Baghdasaryan: > > On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim wrote: > >> On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote: > >>> Currently system hea

Re: [Linaro-mm-sig] [PATCH 1/2] mm: replace BUG_ON in vm_insert_page with a return of an error

2021-02-03 Thread Suren Baghdasaryan
On Wed, Feb 3, 2021 at 12:52 AM Daniel Vetter wrote: > > On Wed, Feb 3, 2021 at 2:57 AM Matthew Wilcox wrote: > > > > On Tue, Feb 02, 2021 at 04:31:33PM -0800, Suren Baghdasaryan wrote: > > > Replace BUG_ON(vma->vm_flags & VM_PFNMAP) in vm_insert_page with &

Re: [Linaro-mm-sig] [PATCH 1/2] mm: replace BUG_ON in vm_insert_page with a return of an error

2021-02-03 Thread Suren Baghdasaryan
On Wed, Feb 3, 2021 at 1:25 PM Daniel Vetter wrote: > > On Wed, Feb 3, 2021 at 9:29 PM Daniel Vetter wrote: > > > > On Wed, Feb 3, 2021 at 9:20 PM Suren Baghdasaryan wrote: > > > > > > On Wed, Feb 3, 2021 at 12:52 AM Daniel Vetter > > > wrote: &

Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm

2021-01-28 Thread Suren Baghdasaryan
On Thu, Jan 28, 2021 at 1:13 AM Christoph Hellwig wrote: > > On Thu, Jan 28, 2021 at 12:38:17AM -0800, Suren Baghdasaryan wrote: > > Currently system heap maps its buffers with VM_PFNMAP flag using > > remap_pfn_range. This results in such buffers not being accounted > >

Re: [PATCH v2 1/1] mm/madvise: replace ptrace attach requirement for process_madvise

2021-01-28 Thread Suren Baghdasaryan
On Tue, Jan 26, 2021 at 5:52 AM 'Michal Hocko' via kernel-team wrote: > > On Wed 20-01-21 14:17:39, Jann Horn wrote: > > On Wed, Jan 13, 2021 at 3:22 PM Michal Hocko wrote: > > > On Tue 12-01-21 09:51:24, Suren Baghdasaryan wrote: > > > > On Tue, J

Re: [PATCH 1/1] process_madvise.2: Add process_madvise man page

2021-01-28 Thread Suren Baghdasaryan
On Thu, Jan 28, 2021 at 12:31 PM Michael Kerrisk (man-pages) wrote: > > Hello Suren, > > On 1/28/21 7:40 PM, Suren Baghdasaryan wrote: > > On Thu, Jan 28, 2021 at 4:24 AM Michael Kerrisk (man-pages) > > wrote: > >> > >> Hello Suren, > >> > &

[PATCH v2 1/1] process_madvise.2: Add process_madvise man page

2021-01-28 Thread Suren Baghdasaryan
/patchwork/patch/1297933/ [2] https://lkml.org/lkml/2020/12/8/1282 [3] https://patchwork.kernel.org/project/selinux/patch/2021070622.2613577-1-sur...@google.com/#23888311 Signed-off-by: Suren Baghdasaryan --- changes in v2: - Changed description of MADV_COLD per Michal Hocko's sugge

Re: [PATCH v2 1/1] mm/madvise: replace ptrace attach requirement for process_madvise

2021-01-28 Thread Suren Baghdasaryan
On Thu, Jan 28, 2021 at 11:51 AM Suren Baghdasaryan wrote: > > On Tue, Jan 26, 2021 at 5:52 AM 'Michal Hocko' via kernel-team > wrote: > > > > On Wed 20-01-21 14:17:39, Jann Horn wrote: > > > On Wed, Jan 13, 2021 at 3:22 PM Michal Hocko wrote:

Re: [PATCH v2 1/1] process_madvise.2: Add process_madvise man page

2021-01-29 Thread Suren Baghdasaryan
On Fri, Jan 29, 2021 at 1:13 AM 'Michal Hocko' via kernel-team wrote: > > On Thu 28-01-21 23:03:40, Suren Baghdasaryan wrote: > > Initial version of process_madvise(2) manual page. Initial text was > > extracted from [1], amended after fix [2] and more details added us

Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm

2021-02-01 Thread Suren Baghdasaryan
On Thu, Jan 28, 2021 at 11:00 AM Suren Baghdasaryan wrote: > > On Thu, Jan 28, 2021 at 10:19 AM Minchan Kim wrote: > > > > On Thu, Jan 28, 2021 at 09:52:59AM -0800, Suren Baghdasaryan wrote: > > > On Thu, Jan 28, 2021 at 1:13 AM Christoph Hellwig > > > w

Re: [PATCH v2 1/1] process_madvise.2: Add process_madvise man page

2021-02-01 Thread Suren Baghdasaryan
> Again, thanks for the rendered version. As before, I've added my > comments to the page source. Hi Michael, Thanks for reviewing! > > On 1/29/21 8:03 AM, Suren Baghdasaryan wrote: > > Initial version of process_madvise(2) manual page. Initial text was > > extracte

[PATCH v3 1/1] process_madvise.2: Add process_madvise man page

2021-02-01 Thread Suren Baghdasaryan
/patchwork/patch/1297933/ [2] https://lkml.org/lkml/2020/12/8/1282 [3] https://patchwork.kernel.org/project/selinux/patch/2021070622.2613577-1-sur...@google.com/#23888311 Signed-off-by: Suren Baghdasaryan Reviewed-by: Michal Hocko --- changes in v2: - Changed description of MADV_COLD per

Re: [PATCH v2 1/1] mm/madvise: replace ptrace attach requirement for process_madvise

2021-02-01 Thread Suren Baghdasaryan
On Thu, Jan 28, 2021 at 11:08 PM Suren Baghdasaryan wrote: > > On Thu, Jan 28, 2021 at 11:51 AM Suren Baghdasaryan wrote: > > > > On Tue, Jan 26, 2021 at 5:52 AM 'Michal Hocko' via kernel-team > > wrote: > > > > > > On Wed 20-01-21 14:17:39, J

Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm

2021-02-02 Thread Suren Baghdasaryan
On Mon, Feb 1, 2021 at 11:03 PM Christoph Hellwig wrote: > > IMHO the > > BUG_ON(vma->vm_flags & VM_PFNMAP); > > in vm_insert_page should just become a WARN_ON_ONCE with an error > return, and then we just need to gradually fix up the callers that > trigger it instead of coming up with wor

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread Suren Baghdasaryan
On Thu, Feb 4, 2021 at 3:14 PM John Hubbard wrote: > > On 2/4/21 12:07 PM, Minchan Kim wrote: > > On Thu, Feb 04, 2021 at 12:50:58AM -0800, John Hubbard wrote: > >> On 2/3/21 7:50 AM, Minchan Kim wrote: > >>> Since CMA is getting used more widely, it's more important to > >>> keep monitoring CMA s

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread Suren Baghdasaryan
On Thu, Feb 4, 2021 at 3:43 PM Suren Baghdasaryan wrote: > > On Thu, Feb 4, 2021 at 3:14 PM John Hubbard wrote: > > > > On 2/4/21 12:07 PM, Minchan Kim wrote: > > > On Thu, Feb 04, 2021 at 12:50:58AM -0800, John Hubbard wrote: > > >> On 2/3/21 7:50 AM,

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread Suren Baghdasaryan
On Thu, Feb 4, 2021 at 4:34 PM John Hubbard wrote: > > On 2/4/21 4:25 PM, John Hubbard wrote: > > On 2/4/21 3:45 PM, Suren Baghdasaryan wrote: > > ... > >>>>>> 2) The overall CMA allocation attempts/failures (first two items > >>>>>>

Re: [PATCH] mm: cma: support sysfs

2021-02-04 Thread Suren Baghdasaryan
On Thu, Feb 4, 2021 at 5:44 PM Minchan Kim wrote: > > On Thu, Feb 04, 2021 at 04:24:20PM -0800, John Hubbard wrote: > > On 2/4/21 4:12 PM, Minchan Kim wrote: > > ... > > > > > Then, how to know how often CMA API failed? > > > > > > > > Why would you even need to know that, *in addition* to knowing

Re: [Linaro-mm-sig] [PATCH 1/2] mm: replace BUG_ON in vm_insert_page with a return of an error

2021-02-04 Thread Suren Baghdasaryan
On Thu, Feb 4, 2021 at 7:55 AM Alex Deucher wrote: > > On Thu, Feb 4, 2021 at 3:16 AM Christian König > wrote: > > > > Am 03.02.21 um 22:41 schrieb Suren Baghdasaryan: > > > [SNIP] > > >>> How many semi-unrelated buffer accounting schemes does google
