Re: [PATCH v3 2/5] alloc_tag: load module tags into separate contiguous memory

2024-10-15 Thread Shakeel Butt
On Mon, Oct 14, 2024 at 07:10:56PM GMT, Suren Baghdasaryan wrote: > On Mon, Oct 14, 2024 at 4:51 PM Andrew Morton > wrote: > > > > On Mon, 14 Oct 2024 13:36:43 -0700 Suren Baghdasaryan > > wrote: > > > > > When a module gets unloaded there is a possibility that some of the > > > allocations it

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-27 Thread Shakeel Butt
memory) irrespective of the presence of swap". On Thu, Dec 21, 2017 at 9:29 AM, Tejun Heo wrote: > Hello, Shakeel. > > On Thu, Dec 21, 2017 at 07:22:20AM -0800, Shakeel Butt wrote: >> I am claiming memory allocations under global pressure will be >> affected by the perf

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-21 Thread Shakeel Butt
> The swap (and its performance) is and should be transparent > to the job owners. Please ignore this statement, I didn't mean to claim on the independence of job performance and underlying swap performance, sorry about that. I meant to say that the amount of anon memory a job can allocate should

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-21 Thread Shakeel Butt
On Thu, Dec 21, 2017 at 5:37 AM, Tejun Heo wrote: > Hello, Shakeel. > > On Wed, Dec 20, 2017 at 05:15:41PM -0800, Shakeel Butt wrote: >> Let's say we have a job that allocates 100 MiB memory and suppose 80 >> MiB is anon and 20 MiB is non-anon (file & kmem). >>
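The 100 MiB example in the snippet above turns on a simple difference in budgets between the two accounting schemes. A minimal sketch of that arithmetic (the 50 MiB swap limit is an illustrative assumption, not a number from the thread):

```python
# Sketch of the budget difference being debated: under memsw accounting a
# single combined limit caps memory+swap, while under cgroup v2's default
# accounting memory and swap are limited independently, so swap extends
# the total a job can consume.

def max_total_charge(memory_max: int, swap_max: int, memsw: bool) -> int:
    """Upper bound (in MiB) on combined memory+swap a cgroup can consume."""
    if memsw:
        # memsw: swapping anon pages does not enlarge the budget.
        return memory_max
    # v2 default: separate limits, so the totals add up.
    return memory_max + swap_max

# A 100 MiB job budget with a hypothetical 50 MiB swap limit:
assert max_total_charge(100, 50, memsw=True) == 100
assert max_total_charge(100, 50, memsw=False) == 150
```

This is the "invariant" side of the argument: with memsw, the amount a job can allocate is the same whatever swap device (or none) backs the machine.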

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-20 Thread Shakeel Butt
On Wed, Dec 20, 2017 at 3:36 PM, Tejun Heo wrote: > Hello, Shakeel. > > On Wed, Dec 20, 2017 at 12:15:46PM -0800, Shakeel Butt wrote: >> > I don't understand how this invariant is useful across different >> > backing swap devices and availability. e.g. Our OOM

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-20 Thread Shakeel Butt
On Wed, Dec 20, 2017 at 12:15 PM, Shakeel Butt wrote: > On Wed, Dec 20, 2017 at 11:37 AM, Tejun Heo wrote: >> Hello, Shakeel. >> >> On Tue, Dec 19, 2017 at 02:39:19PM -0800, Shakeel Butt wrote: >>> Suppose a user wants to run multiple instances of a specific job on

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-20 Thread Shakeel Butt
On Wed, Dec 20, 2017 at 11:37 AM, Tejun Heo wrote: > Hello, Shakeel. > > On Tue, Dec 19, 2017 at 02:39:19PM -0800, Shakeel Butt wrote: >> Suppose a user wants to run multiple instances of a specific job on >> different datacenters and s/he has budget of 100MiB for each insta

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-19 Thread Shakeel Butt
On Tue, Dec 19, 2017 at 1:41 PM, Tejun Heo wrote: > Hello, > > On Tue, Dec 19, 2017 at 10:25:12AM -0800, Shakeel Butt wrote: >> Making the runtime environment, an invariant is very critical to make >> the management of a job easier whose instances run on different >>

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-19 Thread Shakeel Butt
On Tue, Dec 19, 2017 at 9:33 AM, Tejun Heo wrote: > Hello, > > On Tue, Dec 19, 2017 at 09:23:29AM -0800, Shakeel Butt wrote: >> To provide consistent memory usage history using the current >> cgroup-v2's 'swap' interface, an additional metric expressing the

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-19 Thread Shakeel Butt
On Tue, Dec 19, 2017 at 7:24 AM, Tejun Heo wrote: > Hello, > > On Tue, Dec 19, 2017 at 07:12:19AM -0800, Shakeel Butt wrote: >> Yes, there are pros & cons, therefore we should give users the option >> to select the API that is better suited for their use-cases and

Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-19 Thread Shakeel Butt
On Tue, Dec 19, 2017 at 4:49 AM, Michal Hocko wrote: > On Mon 18-12-17 16:01:31, Shakeel Butt wrote: >> The memory controller in cgroup v1 provides the memory+swap (memsw) >> interface to account to the combined usage of memory and swap of the >> jobs. The memsw interfac

[RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

2017-12-18 Thread Shakeel Butt
or disabling memsw through remounting cgroup v2, will only be effective if there are no descendants of the root cgroup. When memsw accounting is enabled then "memory.high" is compared with memory+swap usage. So, when the allocating job's memsw usage hits its high mark, the job will be
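The high-mark rule described in the RFC snippet above can be sketched as follows. This is a simplification in plain Python, not kernel code; the point is only that with memsw enabled the check is against combined memory+swap usage, so swapping pages out does not move a job away from its high mark:

```python
# Hypothetical model of the accounting rule from the RFC: with memsw
# enabled, "memory.high" is compared against memory+swap usage rather
# than memory alone.

def over_high(mem_pages: int, swap_pages: int, high: int, memsw: bool) -> bool:
    """Return True if the cgroup is over its high mark."""
    usage = mem_pages + swap_pages if memsw else mem_pages
    return usage > high

# Job with 80 pages resident and 30 swapped out, high mark of 100 pages:
# default v2 accounting looks only at resident memory...
assert over_high(80, 30, high=100, memsw=False) is False
# ...while memsw accounting charges the swapped-out pages too.
assert over_high(80, 30, high=100, memsw=True) is True
```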

Re: [RESEND v12 3/6] mm, oom: cgroup-aware OOM killer

2017-10-31 Thread Shakeel Butt
On Tue, Oct 31, 2017 at 9:40 AM, Johannes Weiner wrote: > On Tue, Oct 31, 2017 at 08:04:19AM -0700, Shakeel Butt wrote:
>> > +
>> > +static void select_victim_memcg(struct mem_cgroup *root, struct oom_control *oc)
>> > +{
>> > +

Re: [RESEND v12 3/6] mm, oom: cgroup-aware OOM killer

2017-10-31 Thread Shakeel Butt
> +static void select_victim_memcg(struct mem_cgroup *root, struct oom_control *oc)
> +{
> +	struct mem_cgroup *iter;
> +
> +	oc->chosen_memcg = NULL;
> +	oc->chosen_points = 0;
> +
> +	/*
> +	 * The oom_score is calculated for leaf memory cgroups (including

Re: [v10 3/6] mm, oom: cgroup-aware OOM killer

2017-10-04 Thread Shakeel Butt
>> > +	if (memcg_has_children(iter))
>> > +		continue;
>>
>> && iter != root_mem_cgroup ?
>
> Oh, sure. I had a stupid bug in my test script, which prevented me from catching this. Thanks!
>
> This should fix the problem.
> --
> diff --git a/mm/memcontrol.c b/mm

Re: [v10 3/6] mm, oom: cgroup-aware OOM killer

2017-10-04 Thread Shakeel Butt
> +static void select_victim_memcg(struct mem_cgroup *root, struct oom_control *oc)
> +{
> +	struct mem_cgroup *iter;
> +
> +	oc->chosen_memcg = NULL;
> +	oc->chosen_points = 0;
> +
> +	/*
> +	 * The oom_score is calculated for leaf memory cgroups (including

Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Shakeel Butt
On Mon, Oct 2, 2017 at 12:56 PM, Michal Hocko wrote: > On Mon 02-10-17 12:45:18, Shakeel Butt wrote: >> > I am sorry to cut the rest of your proposal because it simply goes over >> > the scope of the proposed solution while the usecase you are mentioning >> > is

Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Shakeel Butt
(Replying again as format of previous reply got messed up). On Mon, Oct 2, 2017 at 1:00 PM, Tim Hockin wrote: > In the example above:
>
>       root
>       /  \
>      A    D
>     / \
>    B   C
>
> Does oom_group allow me to express "compare A and D; if A is chosen compare B and C; ki

Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Shakeel Butt
> I am sorry to cut the rest of your proposal because it simply goes over > the scope of the proposed solution while the usecase you are mentioning > is still possible. If we want to compare intermediate nodes (which seems > to be the case) then we can always provide a knob to opt-in - be it your >

Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Shakeel Butt
> Yes and nobody is disputing that, really. I guess the main disconnect > here is that different people want to have more detailed control over > the victim selection while the patchset tries to handle the most > simplistic scenario when a no userspace control over the selection is > required. And

Re: [v8 0/4] cgroup-aware OOM killer

2017-10-01 Thread Shakeel Butt
> Going back to Michal's example, say the user configured the following:
>
>       root
>       /  \
>      A    D
>     / \
>    B   C
>
> A global OOM event happens and we find this:
> - A > D
> - B, C, D are oomgroups
>
> What the user is telling us is that B, C, and D are compound memory

Re: [v7 5/5] mm, oom: cgroup v2 mount option to disable cgroup-aware OOM killer

2017-09-04 Thread Shakeel Butt
On Mon, Sep 4, 2017 at 7:21 AM, Roman Gushchin wrote: > Introducing of cgroup-aware OOM killer changes the victim selection > algorithm used by default: instead of picking the largest process, > it will pick the largest memcg and then the largest process inside. > > This affects only cgroup v2 use
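The default-selection change Roman describes above (largest memcg first, then the largest process inside it) can be sketched in a few lines. This is a simplification with made-up task names, not the kernel implementation, but it shows why the outcome can differ from the classic per-process OOM killer:

```python
# Sketch of cgroup-aware victim selection: pick the memcg with the
# largest total usage, then the largest process inside that memcg,
# instead of scanning all processes globally.

def pick_victim(memcgs: dict) -> tuple:
    """memcgs maps cgroup name -> {process name: usage}."""
    # Step 1: largest memcg by summed usage.
    cg = max(memcgs, key=lambda name: sum(memcgs[name].values()))
    # Step 2: largest process within that memcg.
    proc = max(memcgs[cg], key=lambda p: memcgs[cg][p])
    return cg, proc

tasks = {
    "A": {"small1": 10, "small2": 15},   # totals 25
    "B": {"big": 20},                    # totals 20, but largest single task
}
# "big" is the globally largest process, yet it survives because its
# memcg's total usage is smaller than A's.
assert pick_victim(tasks) == ("A", "small2")
```

This is exactly the behavioral shift the thread debates: the victim depends on how tasks are grouped into cgroups, not just on per-process size.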