Sorry for the delay. Overlooked this comment...
On Tue, Jul 24, 2018 at 8:49 AM, Patrick Bellasi
wrote:
> On 24-Jul 08:28, Suren Baghdasaryan wrote:
> > Hi Patrick. Thanks for the explanation and links. No more questions
> > from me on this one :)
>
> No problems at all
On Thu, Jul 26, 2018 at 1:07 PM, Johannes Weiner wrote:
> On Thu, Jul 26, 2018 at 11:07:32AM +1000, Singh, Balbir wrote:
>> On 7/25/18 1:15 AM, Johannes Weiner wrote:
>> > On Tue, Jul 24, 2018 at 07:14:02AM +1000, Balbir Singh wrote:
>> >> Does the mechanism scale? I am a little concerned about ho
On Mon, Jul 16, 2018 at 1:28 AM, Patrick Bellasi
wrote:
> Utilization clamping requires each CPU to know which clamp values are
> assigned to tasks that are currently RUNNABLE on that CPU.
> Multiple tasks can be assigned the same clamp value and tasks with
> different clamp values can be concurre
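The mechanism described in this snippet boils down to per-CPU refcounting of clamp groups. A minimal standalone sketch under assumed names (uclamp_cpu and UCLAMP_GROUPS are illustrative, not the patch's actual structures):

  /* Illustrative only: each CPU tracks, per clamp group, how many
   * RUNNABLE tasks currently reference that group's clamp value. */
  #define UCLAMP_GROUPS 8

  struct uclamp_cpu {
      unsigned int value[UCLAMP_GROUPS]; /* clamp value of each group */
      unsigned int tasks[UCLAMP_GROUPS]; /* RUNNABLE tasks in each group */
  };

  /* Refcount the task's group at enqueue time... */
  static void uclamp_cpu_get(struct uclamp_cpu *uc, unsigned int group_id)
  {
      uc->tasks[group_id]++;
  }

  /* ...and release it at dequeue time. */
  static void uclamp_cpu_put(struct uclamp_cpu *uc, unsigned int group_id)
  {
      if (uc->tasks[group_id])
          uc->tasks[group_id]--;
  }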
Hi Patrick,
On Mon, Jul 16, 2018 at 1:28 AM, Patrick Bellasi
wrote:
> Utilization clamping allows clamping the utilization of a CPU within a
> [util_min, util_max] range. This range depends on the set of currently
> RUNNABLE tasks on a CPU, where each task references two "clamp groups"
> defining
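The clamping itself is only a range restriction on the aggregated utilization; a small sketch of the idea, not the patch's code:

  /* Sketch: the utilization handed to the frequency governor is the
   * aggregated task utilization restricted to [util_min, util_max]. */
  static inline unsigned long clamp_util(unsigned long util,
                                         unsigned long util_min,
                                         unsigned long util_max)
  {
      if (util < util_min)
          return util_min;
      if (util > util_max)
          return util_max;
      return util;
  }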
Hi Patrick,
On Fri, Jul 20, 2018 at 8:11 AM, Patrick Bellasi
wrote:
> Hi Suren,
> thanks for the review, all good points... some more comments follow
> inline.
>
> On 19-Jul 16:51, Suren Baghdasaryan wrote:
>> On Mon, Jul 16, 2018 at 1:28 AM, Patr
Hi Patrick,
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
wrote:
> When a util_max clamped task sleeps, its clamp constraints are removed
> from the CPU. However, the blocked utilization on that CPU can still be
> higher than the max clamp value enforced while that task was running.
> This max
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
wrote:
> The cgroup's CPU controller allows assigning a specified (maximum)
> bandwidth to the tasks of a group. However this bandwidth is defined and
> enforced only on a temporal base, without considering the actual
> frequency a CPU is running on
On Fri, Jul 20, 2018 at 7:37 PM, Suren Baghdasaryan wrote:
> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
> wrote:
>> The cgroup's CPU controller allows assigning a specified (maximum)
>> bandwidth to the tasks of a group. However this bandwidth is defined and
>>
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
wrote:
> When a task's util_clamp value is configured via sched_setattr(2), this
> value has to be properly accounted in the corresponding clamp group
> every time the task is enqueued and dequeued. When cgroups are also in
> use, per-task clamp val
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
wrote:
> When a task group refcounts a new clamp group, we need to ensure that
> the new clamp values are immediately enforced to all its tasks which are
> currently RUNNABLE. This is to ensure that all currently RUNNABLE tasks
> are boosted
On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
wrote:
> The utilization is a well-defined property of tasks and CPUs with an
> in-kernel representation based on power-of-two values.
> The current representation, in the [0..SCHED_CAPACITY_SCALE] range,
> allows efficient computations in hot-paths
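For reference, the power-of-two scale being discussed (constants as defined in the kernel headers of that era):

  #define SCHED_CAPACITY_SHIFT 10
  #define SCHED_CAPACITY_SCALE (1L << SCHED_CAPACITY_SHIFT) /* 1024 */

  /* A power-of-two scale keeps hot-path math to shifts, e.g. scaling a
   * utilization value by a CPU capacity: */
  static inline unsigned long scale_by_capacity(unsigned long util,
                                                unsigned long capacity)
  {
      return (util * capacity) >> SCHED_CAPACITY_SHIFT;
  }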
On Mon, Jul 23, 2018 at 8:02 AM, Patrick Bellasi
wrote:
> On 20-Jul 18:23, Suren Baghdasaryan wrote:
>> Hi Patrick,
>
> Hi Suren,
> thanks!
>
>> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
>> wrote:
>
> [...]
>
>> > @@ -977,13 +99
On Mon, Jul 23, 2018 at 8:40 AM, Patrick Bellasi
wrote:
> On 21-Jul 20:05, Suren Baghdasaryan wrote:
>> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
>> wrote:
>> > When a task's util_clamp value is configured via sched_setattr(2), this
>> > val
Hi Patrick. Thanks for the explanation and links. No more questions
from me on this one :)
On Tue, Jul 24, 2018 at 2:56 AM, Patrick Bellasi
wrote:
> On 23-Jul 10:11, Suren Baghdasaryan wrote:
>> On Mon, Jul 23, 2018 at 8:40 AM, Patrick Bellasi
>> wrote:
>> > On 21-Jul 2
On Tue, Jul 24, 2018 at 9:43 AM, Patrick Bellasi
wrote:
> On 21-Jul 21:04, Suren Baghdasaryan wrote:
>> On Mon, Jul 16, 2018 at 1:29 AM, Patrick Bellasi
>> wrote:
>
> [...]
>
>> > +static inline unsigned int scale_from_percent(unsigned int pct)
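The helper quoted above converts a userspace percentage to the kernel's utilization scale, presumably along these lines (a sketch, since the snippet is cut off; rounding may differ in the actual patch):

  #define SCHED_CAPACITY_SCALE 1024

  /* Sketch: map a [0..100] percentage onto [0..SCHED_CAPACITY_SCALE]. */
  static inline unsigned int scale_from_percent(unsigned int pct)
  {
      return (SCHED_CAPACITY_SCALE * pct) / 100;
  }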
> pressure
> metrics are read, the current per-cpu states, if any, are taken into
> account as well.
>
> Any ongoing states are concluded, their time snapshotted, and then
> restarted. This requires holding the rq lock to avoid corruption. It
> could use some form of rq lock rateli
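The conclude/snapshot/restart step can be modeled in a few lines; a simplified standalone sketch (psi_group_cpu_model is an invented name, not the real struct):

  #include <stdint.h>

  struct psi_group_cpu_model {
      uint64_t times[6];    /* accumulated time per tracked stall state */
      uint64_t state_start; /* timestamp when the current states began */
  };

  /* Conclude ongoing state s at time 'now' by folding its elapsed time
   * into the running total, then restart the clock. The real code does
   * this under the rq lock, which is what the ratelimiting concern
   * above is about. */
  static void conclude_and_restart(struct psi_group_cpu_model *g, int s,
                                   uint64_t now)
  {
      g->times[s] += now - g->state_start;
      g->state_start = now;
  }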
On Fri, Jul 13, 2018 at 3:49 PM, Johannes Weiner wrote:
> On Fri, Jul 13, 2018 at 03:13:07PM -0700, Suren Baghdasaryan wrote:
>> On Thu, Jul 12, 2018 at 10:29 AM, Johannes Weiner wrote:
>> > might want to know about and react to stall states before they have
>> > even
Dear kernel maintainers, I know it was close to the holiday season when I
sent this patch last month, so a delay was expected. Could you please
take a look at it and provide your feedback?
Thanks!
On Wed, Dec 6, 2017 at 9:27 AM, Suren Baghdasaryan wrote:
> When system is under memory pressure it
Hi Johannes,
I tried your previous memdelay patches before this new set was posted,
and the results were promising for predicting when an Android system is
close to OOM. I'm definitely going to try this one after I backport it to
4.9.
On Mon, May 7, 2018 at 2:01 PM, Johannes Weiner wrote:
> Hi,
>
> I pre
On Mon, Dec 17, 2018 at 6:58 AM Peter Zijlstra wrote:
>
> On Fri, Dec 14, 2018 at 09:15:05AM -0800, Suren Baghdasaryan wrote:
> > Eliminate the idle mode and keep the worker doing 2s update intervals
> > at all times.
>
> That sounds like a bad deal.. esp. so for batt
On Mon, Dec 17, 2018 at 7:55 AM Peter Zijlstra wrote:
>
> On Fri, Dec 14, 2018 at 09:15:06AM -0800, Suren Baghdasaryan wrote:
> > The psi monitoring patches will need to determine the same states as
> > record_times(). To avoid calculating them twice, maintain a state ma
On Mon, Dec 17, 2018 at 8:22 AM Peter Zijlstra wrote:
>
> On Fri, Dec 14, 2018 at 09:15:08AM -0800, Suren Baghdasaryan wrote:
> > +ssize_t psi_trigger_parse(char *buf, size_t nbytes, enum psi_res res,
> > + enum psi_states *state, u32 *threshold_us, u32 *win_sz_us)
> >
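For context, the trigger interface these patches converged on is driven from userspace by writing "<some|full> <threshold_us> <window_us>" to a /proc/pressure file and polling for POLLPRI; a minimal example based on the documentation that accompanied the final series:

  #include <fcntl.h>
  #include <poll.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      const char *trig = "some 150000 1000000"; /* 150ms stall per 1s window */
      struct pollfd pfd;
      int fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);

      if (fd < 0 || write(fd, trig, strlen(trig) + 1) < 0) {
          perror("psi trigger");
          return 1;
      }
      pfd.fd = fd;
      pfd.events = POLLPRI;
      if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLPRI))
          printf("memory pressure threshold breached\n");
      close(fd);
      return 0;
  }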
On Mon, Dec 17, 2018 at 8:37 AM Peter Zijlstra wrote:
>
> On Fri, Dec 14, 2018 at 09:15:08AM -0800, Suren Baghdasaryan wrote:
> > @@ -358,28 +526,23 @@ static void psi_update_work(struct work_struct *work)
> > {
> > struct delayed_work *dwork;
> >
2018 at 9:30 AM Johannes Weiner wrote:
>
> On Tue, Dec 18, 2018 at 11:46:22AM +0100, Peter Zijlstra wrote:
> > On Mon, Dec 17, 2018 at 05:21:05PM -0800, Suren Baghdasaryan wrote:
> > > On Mon, Dec 17, 2018 at 8:22 AM Peter Zijlstra
> > > wrote:
> >
> > > >
On Tue, Dec 18, 2018 at 11:18 AM Joel Fernandes wrote:
>
> On Tue, Dec 18, 2018 at 9:58 AM 'Suren Baghdasaryan' via kernel-team
> wrote:
> >
> > Current design supports only whole percentages and if userspace needs
> > more granularity then it has to us
Rename the psi_group structure members used for calculating psi totals
and averages, to clearly distinguish them from the trigger-related fields
that will be added next.
Signed-off-by: Suren Baghdasaryan
---
include/linux/psi_types.h | 15 ---
kernel/sched/psi.c | 26
have
per-fd trigger configurations.
Signed-off-by: Johannes Weiner
Signed-off-by: Suren Baghdasaryan
---
include/linux/cgroup-defs.h | 4
kernel/cgroup/cgroup.c | 12
2 files changed, 16 insertions(+)
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
have
per-fd trigger configurations.
Signed-off-by: Johannes Weiner
Signed-off-by: Suren Baghdasaryan
---
fs/kernfs/file.c | 31 ---
include/linux/kernfs.h | 6 ++
2 files changed, 26 insertions(+), 11 deletions(-)
diff --git a/fs/kernfs/file.c b/fs/kernfs
Signed-off-by: Suren Baghdasaryan
---
kernel/sched/psi.c | 55 +++---
1 file changed, 22 insertions(+), 33 deletions(-)
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index fe24de3fbc93..d2b9c9a1a62f 100644
--- a/kernel/sched/psi.c
+++ b/kernel/s
duration of one
tracking window to avoid repeated activations/deactivations when psi
signal is bouncing.
Notifications to the users are rate-limited to one per tracking window.
Signed-off-by: Suren Baghdasaryan
---
Documentation/accounting/psi.txt | 105 +++
include/linux/psi.h | 10
The psi monitoring patches will need to determine the same states as
record_times(). To avoid calculating them twice, maintain a state mask
that can be consulted cheaply. Do this in a separate patch to keep the
churn in the main feature patch at a minimum.
Signed-off-by: Suren Baghdasaryan
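The state mask is simply a bitmap of the stall states that are currently active, so readers can test them without recomputing; an illustrative sketch using the state indices psi defines upstream:

  enum psi_states { PSI_IO_SOME, PSI_IO_FULL, PSI_MEM_SOME, PSI_MEM_FULL,
                    PSI_CPU_SOME, PSI_NONIDLE, NR_PSI_STATES };

  /* Compute the active states once and publish them as bits. */
  static unsigned int build_state_mask(const int active[NR_PSI_STATES])
  {
      unsigned int mask = 0;
      int s;

      for (s = 0; s < NR_PSI_STATES; s++)
          if (active[s])
              mask |= 1u << s;
      return mask;
  }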
ed in collaboration with Johannes Weiner.
The patches are based on 4.20-rc6.
Johannes Weiner (3):
fs: kernfs: add poll file operation
kernel: cgroup: add poll file operation
psi: eliminate lazy clock mode
Suren Baghdasaryan (3):
psi: introduce state_mask to represent stalled psi s
Hi Daniel,
On Sun, Sep 16, 2018 at 10:22 PM, Daniel Drake wrote:
> Hi Suren
>
> On Fri, Sep 7, 2018 at 11:58 PM, Suren Baghdasaryan wrote:
>> Thanks for the new patchset! Backported to 4.9 and retested on an ARMv8
>> 8-core system running Android. Signals behave as expected
using PSI for Android, I will try to upstream the backport. If
upstream rejects it, we will have to merge it into the Android common
kernel repo as a last resort. Hope this answers your question.
> I guess that this patch is too big for the LTS tree.
>
> On 09/07/2018 05:58 PM, Suren Baghdasaryan wr
On Mon, Aug 12, 2024 at 11:07 PM Mateusz Guzik wrote:
>
> On Mon, Aug 12, 2024 at 09:29:16PM -0700, Andrii Nakryiko wrote:
> > Add RCU protection for the file struct's backing memory by adding the
> > SLAB_TYPESAFE_BY_RCU flag to files_cachep. This will allow lockless
> > access to struct file's fields
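SLAB_TYPESAFE_BY_RCU is weaker than full RCU: the object can be freed and reused as another struct file while a reader looks at it, so a lockless reader must take a conditional reference and then re-validate. A kernel-style sketch of that pattern (get_vma_file_rcu is a hypothetical name, not code from this series):

  static struct file *get_vma_file_rcu(struct vm_area_struct *vma)
  {
      struct file *file;

      rcu_read_lock();
      file = READ_ONCE(vma->vm_file);
      /* Conditional ref: fails if the file is mid-free. */
      if (file && !atomic_long_inc_not_zero(&file->f_count))
          file = NULL;
      rcu_read_unlock();

      /* The slab may have recycled the object; re-check the pointer. */
      if (file && READ_ONCE(vma->vm_file) != file) {
          fput(file);
          file = NULL;
      }
      return file;
  }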
On Mon, Aug 12, 2024 at 11:18 PM Mateusz Guzik wrote:
>
> On Mon, Aug 12, 2024 at 09:29:17PM -0700, Andrii Nakryiko wrote:
> > Now that files_cachep is SLAB_TYPESAFE_BY_RCU, we can safely access
> > vma->vm_file->f_inode locklessly, but only under rcu_read_lock()
> > protection, attempting uprobe lookup
On Thu, Aug 15, 2024 at 9:47 AM Andrii Nakryiko
wrote:
>
> On Thu, Aug 15, 2024 at 6:44 AM Mateusz Guzik wrote:
> >
> > On Tue, Aug 13, 2024 at 08:36:03AM -0700, Suren Baghdasaryan wrote:
> > > On Mon, Aug 12, 2024 at 11:18 PM Mateusz Guzik wrote:
> > > >
On Thu, Aug 15, 2024 at 11:58 AM Jann Horn wrote:
>
> +brauner for "struct file" lifetime
>
> On Thu, Aug 15, 2024 at 7:45 PM Suren Baghdasaryan wrote:
> > On Thu, Aug 15, 2024 at 9:47 AM Andrii Nakryiko
> > wrote:
> > >
> > >
Hi Folks,
On Tue, Apr 20, 2021 at 12:18 PM Roman Gushchin wrote:
>
> On Mon, Apr 19, 2021 at 06:44:02PM -0700, Shakeel Butt wrote:
> > Proposal: Provide memory guarantees to userspace oom-killer.
> >
> > Background:
> >
> > Issues with kernel oom-killer:
> > 1. Very conservative and prefer to rec
Hi Michael,
On Sat, Feb 13, 2021 at 2:04 PM Michael Kerrisk (man-pages)
wrote:
>
> Hello Suren,
>
> On 2/2/21 11:12 PM, Suren Baghdasaryan wrote:
> > Hi Michael,
> >
> > On Tue, Feb 2, 2021 at 2:45 AM Michael Kerrisk (man-pages)
> > wrote:
> >
On Wed, Apr 7, 2021 at 9:07 AM Linus Torvalds
wrote:
>
> On Wed, Apr 7, 2021 at 6:22 AM Vlastimil Babka wrote:
> >
> > 1) Ignore the issue (outside of Android at least). The security model of
> > zygote
> > is unusual. Where else a parent of fork() doesn't trust the child, which is
> > the
> >
On Wed, Apr 7, 2021 at 12:23 PM Linus Torvalds
wrote:
>
> On Wed, Apr 7, 2021 at 11:47 AM Mikulas Patocka wrote:
> >
> > So, we fixed it, but we don't know why.
> >
> > Peter Xu's patchset that fixed it is here:
> > https://lore.kernel.org/lkml/20200821234958.7896-1-pet...@redhat.com/
>
> Yeah, t
Hi Patrick,
On Tue, Apr 2, 2019 at 3:42 AM Patrick Bellasi wrote:
>
> When a task sleeps, it removes its max utilization clamp from its CPU.
> However, the blocked utilization on that CPU can be higher than the max
> clamp value enforced while the task was running. This allows undesired
> CPU fre
On Tue, Apr 2, 2019 at 3:42 AM Patrick Bellasi wrote:
>
> The SCHED_DEADLINE scheduling class provides an advanced and formal
> model to define tasks requirements that can translate into proper
> decisions for both task placements and frequencies selections. Other
> classes have a more simplified
On Tue, Apr 2, 2019 at 3:42 AM Patrick Bellasi wrote:
>
> By default FAIR tasks start without clamps, i.e. neither boosted nor
> capped, and they run at the best frequency matching their utilization
> demand. This default behavior does not fit RT tasks which instead are
> expected to run at the m
On Tue, Apr 2, 2019 at 3:43 AM Patrick Bellasi wrote:
>
> The cgroup CPU bandwidth controller allows assigning a specified
> (maximum) bandwidth to the tasks of a group. However this bandwidth is
> defined and enforced only on a temporal base, without considering the
> actual frequency a CPU is ru
On Tue, Apr 2, 2019 at 3:42 AM Patrick Bellasi wrote:
>
> Tasks without a user-defined clamp value are considered not clamped
> and by default their utilization can have any value in the
> [0..SCHED_CAPACITY_SCALE] range.
>
> Tasks with a user-defined clamp value are allowed to request any value
>
On Fri, Apr 12, 2019 at 7:14 AM Daniel Colascione wrote:
>
> On Thu, Apr 11, 2019 at 11:53 PM Michal Hocko wrote:
> >
> > On Thu 11-04-19 08:33:13, Matthew Wilcox wrote:
> > > On Wed, Apr 10, 2019 at 06:43:53PM -0700, Suren Baghdasaryan wrote:
> > > > A
On Thu, Apr 25, 2019 at 2:13 PM Tetsuo Handa
wrote:
>
> On 2019/04/11 10:43, Suren Baghdasaryan wrote:
> > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > index 3a2484884cfd..6449710c8a06 100644
> > --- a/mm/oom_kill.c
> > +++ b/mm/oom_kill.c
> > @@ -1102,6 +
On Tue, Mar 12, 2019 at 1:05 AM Michal Hocko wrote:
>
> On Mon 11-03-19 15:15:35, Suren Baghdasaryan wrote:
> > On Mon, Mar 11, 2019 at 1:46 PM Sultan Alsawaf
> > wrote:
> > >
> > > On Mon, Mar 11, 2019 at 01:10:36PM -0700, Suren Baghdasaryan wrote:
> >
On Tue, Mar 12, 2019 at 9:58 AM Michal Hocko wrote:
>
> On Tue 12-03-19 09:37:41, Sultan Alsawaf wrote:
> > I have not had a chance to look at PSI yet, but
> > unless a PSI-enabled solution allows allocations to reach the same point as
> > when
> > the OOM killer is invoked (which is contradictor
On Wed, Mar 13, 2019 at 8:15 AM Patrick Bellasi wrote:
>
> On 12-Mar 13:52, Dietmar Eggemann wrote:
> > On 2/8/19 11:05 AM, Patrick Bellasi wrote:
> >
> > [...]
> >
> > > +config UCLAMP_BUCKETS_COUNT
> > > + int "Number of supported utilization clamp buckets"
> > > + range 5 20
> > > + defau
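The Kconfig knob quoted here sizes the bucket array; a clamp value then maps to a bucket by simple division. A sketch of the mapping implied by the series (the default of 5 is assumed from the range shown):

  #define SCHED_CAPACITY_SCALE 1024
  #define UCLAMP_BUCKETS       5   /* assumed; the quoted range is 5..20 */
  #define UCLAMP_BUCKET_DELTA  (SCHED_CAPACITY_SCALE / UCLAMP_BUCKETS)

  static inline unsigned int uclamp_bucket_id(unsigned int clamp_value)
  {
      unsigned int id = clamp_value / UCLAMP_BUCKET_DELTA;

      return id < UCLAMP_BUCKETS ? id : UCLAMP_BUCKETS - 1;
  }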
On Wed, Mar 13, 2019 at 12:46 PM Peter Zijlstra wrote:
>
> On Wed, Mar 13, 2019 at 03:23:59PM +, Patrick Bellasi wrote:
> > On 13-Mar 15:09, Peter Zijlstra wrote:
> > > On Fri, Feb 08, 2019 at 10:05:40AM +, Patrick Bellasi wrote:
>
> > > > +static inline void uclamp_rq_update(struct rq *rq
On Wed, Mar 13, 2019 at 6:52 AM Peter Zijlstra wrote:
>
> On Fri, Feb 08, 2019 at 10:05:40AM +, Patrick Bellasi wrote:
> > +/*
> > + * When a task is enqueued on a rq, the clamp bucket currently defined by
> > the
> > + * task's uclamp::bucket_id is reference counted on that rq. This also
> >
On Fri, Feb 8, 2019 at 2:06 AM Patrick Bellasi wrote:
>
> Utilization clamping allows clamping the CPU's utilization within a
> [util_min, util_max] range, depending on the set of RUNNABLE tasks on
> that CPU. Each task references two "clamp buckets" defining its minimum
> and maximum (util_{min,m
On Wed, Mar 13, 2019 at 9:16 AM Patrick Bellasi wrote:
>
> On 13-Mar 15:12, Peter Zijlstra wrote:
> > On Fri, Feb 08, 2019 at 10:05:41AM +, Patrick Bellasi wrote:
> > > +static inline void uclamp_idle_reset(struct rq *rq, unsigned int
> > > clamp_id,
> > > +uns
On Thu, Mar 14, 2019 at 7:46 AM Patrick Bellasi wrote:
>
> On 13-Mar 14:32, Suren Baghdasaryan wrote:
> > On Fri, Feb 8, 2019 at 2:06 AM Patrick Bellasi
> > wrote:
> > >
> > > Utilization clamping allows clamping the CPU's utilization within a
> >
On Fri, Feb 8, 2019 at 2:06 AM Patrick Bellasi wrote:
>
> In order to properly support hierarchical resource control, the cgroup
> delegation model requires that attribute writes from a child group never
> fail but are still (potentially) constrained based on the parent's assigned
> resources. This r
On Thu, Mar 14, 2019 at 8:41 AM Patrick Bellasi wrote:
>
> On 14-Mar 08:29, Suren Baghdasaryan wrote:
> > On Thu, Mar 14, 2019 at 7:46 AM Patrick Bellasi
> > wrote:
> > > On 13-Mar 14:32, Suren Baghdasaryan wrote:
> > > > On Fri, Feb 8, 2019 at
On Thu, Mar 14, 2019 at 9:37 PM Daniel Colascione wrote:
>
> On Thu, Mar 14, 2019 at 8:16 PM Steven Rostedt wrote:
> >
> > On Thu, 14 Mar 2019 13:49:11 -0700
> > Sultan Alsawaf wrote:
> >
> > > Perhaps I'm missing something, but if you want to know when a process has
> > > died
> > > after send
/deactivations when psi
signal is bouncing.
Notifications to the users are rate-limited to one per tracking window.
Signed-off-by: Suren Baghdasaryan
Signed-off-by: Johannes Weiner
---
This is a respin of:
https://lwn.net/ml/linux-kernel/20190124211518.244221-1-surenb%40google.com/
First 4
Hi Andrew,
On Mon, Jan 28, 2019 at 3:06 PM Andrew Morton wrote:
>
> On Wed, 16 Jan 2019 14:35:01 -0500 Johannes Weiner wrote:
>
> > psi has provisions to shut off the periodic aggregation worker when
> > there is a period of no task activity - and thus no data that needs
> > aggregating. However
Add a new SS_EXPEDITE flag to be used when sending SIGKILL via the
pidfd_send_signal() syscall to allow expedited memory reclaim of the
victim process. Usage of this flag is currently limited to the SIGKILL
signal and to privileged users.
Signed-off-by: Suren Baghdasaryan
---
include/linux/sched
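SS_EXPEDITE was an RFC and never merged; a hypothetical caller would have looked roughly like this (the flag value is assumed purely for illustration):

  #include <signal.h>
  #include <stddef.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef SS_EXPEDITE
  #define SS_EXPEDITE 1  /* assumed value, for illustration only */
  #endif

  /* Send SIGKILL through a pidfd and request expedited reclaim of the
   * victim's memory (privileged callers only, per the RFC). */
  static int kill_expedited(int pidfd)
  {
      return syscall(__NR_pidfd_send_signal, pidfd, SIGKILL, NULL,
                     SS_EXPEDITE);
  }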
Create an API to allow users outside of oom_kill.c to mark a victim and
wake up the oom_reaper thread for expedited memory reclaim of the process
being killed.
Signed-off-by: Suren Baghdasaryan
---
include/linux/oom.h | 1 +
mm/oom_kill.c | 15 +++
2 files changed, 16 insertions(+)
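A sketch of what such an API plausibly does internally, expressed with existing oom_kill.c helpers (illustrative only; the RFC's actual implementation may differ):

  /* Mark the dying task as an OOM victim and hand it to the OOM reaper,
   * so its address space is reclaimed without waiting for task exit. */
  bool expedite_reclaim(struct task_struct *task)
  {
      if (!task_will_free_mem(task))
          return false;

      mark_oom_victim(task);
      wake_oom_reaper(task);
      return true;
  }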
min reclaim speed: 856 MB/sec vs 3236 MB/sec
The patches are based on 5.1-rc1
Suren Baghdasaryan (2):
mm: oom: expose expedite_reclaim to use oom_reaper outside of
oom_kill.c
signal: extend pidfd_send_signal
> On Wed, Apr 10, 2019 at 06:43:53PM -0700, Suren Baghdasaryan wrote:
> > Add new SS_EXPEDITE flag to be used when sending SIGKILL via
> > pidfd_send_signal() syscall to allow expedited memory reclaim of the
> > victim process. The usage of this flag is currently limited to SIGKILL
> >
On Thu, Apr 11, 2019 at 8:18 AM Suren Baghdasaryan wrote:
>
> Thanks for the feedback!
> Just to be clear, this implementation is used in this RFC as a
> reference to explain the intent. To be honest I don't think it will be
> adopted as is even if the idea survives scrutiny
On Wed, Feb 10, 2021 at 5:06 AM Daniel Vetter wrote:
>
> On Tue, Feb 09, 2021 at 12:16:51PM -0800, Suren Baghdasaryan wrote:
> > On Tue, Feb 9, 2021 at 12:03 PM Daniel Vetter wrote:
> > >
> > > On Tue, Feb 9, 2021 at 6:46 PM Christian König
> > > wr
The code looks fine to me. The description needs a bit of polishing :)
On Wed, Feb 10, 2021 at 8:26 AM Minchan Kim wrote:
>
> The Linux VM does not try hard to satisfy PAGE_ALLOC_COSTLY_ORDER
> allocations, so it normally expects drivers to pass __GFP_NOWARN in
> that case if they have fallback options.
>
> system_heap in
On Wed, Feb 10, 2021 at 9:21 AM Daniel Vetter wrote:
>
> On Wed, Feb 10, 2021 at 5:39 PM Suren Baghdasaryan wrote:
> >
> > On Wed, Feb 10, 2021 at 5:06 AM Daniel Vetter wrote:
> > >
> > > On Tue, Feb 09, 2021 at 12:16:51PM -0800, Suren Baghdasaryan wrote:
>
On Wed, Feb 10, 2021 at 10:32 AM Christian König
wrote:
>
>
>
> On 10.02.21 at 17:39, Suren Baghdasaryan wrote:
> > On Wed, Feb 10, 2021 at 5:06 AM Daniel Vetter wrote:
> >> On Tue, Feb 09, 2021 at 12:16:51PM -0800, Suren Baghdasaryan wrote:
> >>> On Tue
On Wed, Feb 17, 2021 at 11:55 PM Michael Kerrisk (man-pages)
wrote:
>
> Hello Suren,
>
> >> Thanks. I added a few words to clarify this.
> > Any link where I can see the final version?
>
> Sure:
> https://git.kernel.org/pub/scm/docs/man-pages/man-pages.git/tree/man2/process_madvise.2
>
> Also ren
On Tue, Feb 2, 2021 at 12:51 AM Christoph Hellwig wrote:
>
> On Tue, Feb 02, 2021 at 12:44:44AM -0800, Suren Baghdasaryan wrote:
> > On Mon, Feb 1, 2021 at 11:03 PM Christoph Hellwig
> > wrote:
> > >
> > > IMHO the
> > >
> > >
> undocumented pieces in *madvise(2)*,
> as well as one other question. See below.
>
> On 2/2/21 6:30 AM, Suren Baghdasaryan wrote:
> > Initial version of process_madvise(2) manual page. Initial text was
> > extracted from [1], amended after fix [2] and more details added using
> > ma
tifying drivers that need to clear VM_PFNMAP before
using dmabuf system heap which is moving to use vm_insert_page.
Suggested-by: Christoph Hellwig
Signed-off-by: Suren Baghdasaryan
---
mm/memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.
[1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
[2]
http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
(sorry, could not find lore links for these discussions)
Suggested-by: Laura Abbott
Signed-off-by: Suren Baghdasaryan
---
v1 post
On Tue, Feb 2, 2021 at 5:31 PM Minchan Kim wrote:
>
> On Tue, Feb 02, 2021 at 04:31:33PM -0800, Suren Baghdasaryan wrote:
> > Replace BUG_ON(vma->vm_flags & VM_PFNMAP) in vm_insert_page with
> > WARN_ON_ONCE and returning an error. This is to ensure users of the
On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim wrote:
>
> On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote:
> > Currently system heap maps its buffers with VM_PFNMAP flag using
> > remap_pfn_range. This results in such buffers not being accounted
> > for in
On Tue, Feb 2, 2021 at 6:07 PM John Stultz wrote:
>
> On Tue, Feb 2, 2021 at 4:31 PM Suren Baghdasaryan wrote:
> > Currently system heap maps its buffers with VM_PFNMAP flag using
> > remap_pfn_range. This results in such buffers not being accounted
> > for in PSS calcul
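The direction discussed here is to map the heap's backing pages as ordinary pages with vm_insert_page(), so they appear in rss/PSS, instead of remap_pfn_range(), which marks the VMA VM_PFNMAP and hides them. A sketch with the buffer's page list passed in explicitly (hypothetical helper, not the actual heap code):

  static int heap_mmap_pages(struct vm_area_struct *vma,
                             struct page **pages, unsigned long nr_pages)
  {
      unsigned long addr = vma->vm_start;
      unsigned long i;
      int ret;

      /* Insert each backing page so it is accounted like any other. */
      for (i = 0; i < nr_pages; i++, addr += PAGE_SIZE) {
          ret = vm_insert_page(vma, addr, pages[i]);
          if (ret)
              return ret;
      }
      return 0;
  }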
On Tue, Feb 2, 2021 at 5:55 PM Matthew Wilcox wrote:
>
> On Tue, Feb 02, 2021 at 04:31:33PM -0800, Suren Baghdasaryan wrote:
> > Replace BUG_ON(vma->vm_flags & VM_PFNMAP) in vm_insert_page with
> > WARN_ON_ONCE and returning an error. This is to ensure users of the
On Wed, Feb 3, 2021 at 12:06 AM Christian König
wrote:
>
> On 03.02.21 at 03:02, Suren Baghdasaryan wrote:
> > On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim wrote:
> >> On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote:
> >>> Currently system hea
On Wed, Feb 3, 2021 at 12:52 AM Daniel Vetter wrote:
>
> On Wed, Feb 3, 2021 at 2:57 AM Matthew Wilcox wrote:
> >
> > On Tue, Feb 02, 2021 at 04:31:33PM -0800, Suren Baghdasaryan wrote:
> > > Replace BUG_ON(vma->vm_flags & VM_PFNMAP) in vm_insert_page with
On Wed, Feb 3, 2021 at 1:25 PM Daniel Vetter wrote:
>
> On Wed, Feb 3, 2021 at 9:29 PM Daniel Vetter wrote:
> >
> > On Wed, Feb 3, 2021 at 9:20 PM Suren Baghdasaryan wrote:
> > >
> > > On Wed, Feb 3, 2021 at 12:52 AM Daniel Vetter
> > > wrote:
On Thu, Jan 28, 2021 at 1:13 AM Christoph Hellwig wrote:
>
> On Thu, Jan 28, 2021 at 12:38:17AM -0800, Suren Baghdasaryan wrote:
> > Currently system heap maps its buffers with VM_PFNMAP flag using
> > remap_pfn_range. This results in such buffers not being accounted
> >
On Tue, Jan 26, 2021 at 5:52 AM 'Michal Hocko' via kernel-team
wrote:
>
> On Wed 20-01-21 14:17:39, Jann Horn wrote:
> > On Wed, Jan 13, 2021 at 3:22 PM Michal Hocko wrote:
> > > On Tue 12-01-21 09:51:24, Suren Baghdasaryan wrote:
> > > > On Tue, J
On Thu, Jan 28, 2021 at 12:31 PM Michael Kerrisk (man-pages)
wrote:
>
> Hello Suren,
>
> On 1/28/21 7:40 PM, Suren Baghdasaryan wrote:
> > On Thu, Jan 28, 2021 at 4:24 AM Michael Kerrisk (man-pages)
> > wrote:
> >>
> >> Hello Suren,
> >>
> &
/patchwork/patch/1297933/
[2] https://lkml.org/lkml/2020/12/8/1282
[3]
https://patchwork.kernel.org/project/selinux/patch/2021070622.2613577-1-sur...@google.com/#23888311
Signed-off-by: Suren Baghdasaryan
---
changes in v2:
- Changed description of MADV_COLD per Michal Hocko's sugge
On Thu, Jan 28, 2021 at 11:51 AM Suren Baghdasaryan wrote:
>
> On Tue, Jan 26, 2021 at 5:52 AM 'Michal Hocko' via kernel-team
> wrote:
> >
> > On Wed 20-01-21 14:17:39, Jann Horn wrote:
> > > On Wed, Jan 13, 2021 at 3:22 PM Michal Hocko wrote:
On Fri, Jan 29, 2021 at 1:13 AM 'Michal Hocko' via kernel-team
wrote:
>
> On Thu 28-01-21 23:03:40, Suren Baghdasaryan wrote:
> > Initial version of process_madvise(2) manual page. Initial text was
> > extracted from [1], amended after fix [2] and more details added us
On Thu, Jan 28, 2021 at 11:00 AM Suren Baghdasaryan wrote:
>
> On Thu, Jan 28, 2021 at 10:19 AM Minchan Kim wrote:
> >
> > On Thu, Jan 28, 2021 at 09:52:59AM -0800, Suren Baghdasaryan wrote:
> > > On Thu, Jan 28, 2021 at 1:13 AM Christoph Hellwig
> > > w
> Again, thanks for the rendered version. As before, I've added my
> comments to the page source.
Hi Michael,
Thanks for reviewing!
>
> On 1/29/21 8:03 AM, Suren Baghdasaryan wrote:
> > Initial version of process_madvise(2) manual page. Initial text was
> > extracte
/patchwork/patch/1297933/
[2] https://lkml.org/lkml/2020/12/8/1282
[3]
https://patchwork.kernel.org/project/selinux/patch/2021070622.2613577-1-sur...@google.com/#23888311
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Michal Hocko
---
changes in v2:
- Changed description of MADV_COLD per
On Thu, Jan 28, 2021 at 11:08 PM Suren Baghdasaryan wrote:
>
> On Thu, Jan 28, 2021 at 11:51 AM Suren Baghdasaryan wrote:
> >
> > On Tue, Jan 26, 2021 at 5:52 AM 'Michal Hocko' via kernel-team
> > wrote:
> > >
> > > On Wed 20-01-21 14:17:39, J
On Mon, Feb 1, 2021 at 11:03 PM Christoph Hellwig wrote:
>
> IMHO the
>
> BUG_ON(vma->vm_flags & VM_PFNMAP);
>
> in vm_insert_page should just become a WARN_ON_ONCE with an error
> return, and then we just need to gradually fix up the callers that
> trigger it instead of coming up with wor
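Concretely, the suggestion amounts to replacing the assertion inside vm_insert_page() with something like the following (sketch):

  if (WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP))
      return -EINVAL;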
On Thu, Feb 4, 2021 at 3:14 PM John Hubbard wrote:
>
> On 2/4/21 12:07 PM, Minchan Kim wrote:
> > On Thu, Feb 04, 2021 at 12:50:58AM -0800, John Hubbard wrote:
> >> On 2/3/21 7:50 AM, Minchan Kim wrote:
> >>> Since CMA is getting used more widely, it's more important to
> >>> keep monitoring CMA s
On Thu, Feb 4, 2021 at 3:43 PM Suren Baghdasaryan wrote:
>
> On Thu, Feb 4, 2021 at 3:14 PM John Hubbard wrote:
> >
> > On 2/4/21 12:07 PM, Minchan Kim wrote:
> > > On Thu, Feb 04, 2021 at 12:50:58AM -0800, John Hubbard wrote:
> > >> On 2/3/21 7:50 AM,
On Thu, Feb 4, 2021 at 4:34 PM John Hubbard wrote:
>
> On 2/4/21 4:25 PM, John Hubbard wrote:
> > On 2/4/21 3:45 PM, Suren Baghdasaryan wrote:
> > ...
> >>>>>> 2) The overall CMA allocation attempts/failures (first two items
> >>>>>>
On Thu, Feb 4, 2021 at 5:44 PM Minchan Kim wrote:
>
> On Thu, Feb 04, 2021 at 04:24:20PM -0800, John Hubbard wrote:
> > On 2/4/21 4:12 PM, Minchan Kim wrote:
> > ...
> > > > > Then, how to know how often CMA API failed?
> > > >
> > > > Why would you even need to know that, *in addition* to knowing
On Thu, Feb 4, 2021 at 7:55 AM Alex Deucher wrote:
>
> On Thu, Feb 4, 2021 at 3:16 AM Christian König
> wrote:
> >
> > Am 03.02.21 um 22:41 schrieb Suren Baghdasaryan:
> > > [SNIP]
> > >>> How many semi-unrelated buffer accounting schemes does google