On Mon, Oct 14, 2024 at 07:10:56PM GMT, Suren Baghdasaryan wrote:
> On Mon, Oct 14, 2024 at 4:51 PM Andrew Morton wrote:
> >
> > On Mon, 14 Oct 2024 13:36:43 -0700 Suren Baghdasaryan wrote:
> >
> > > When a module gets unloaded there is a possibility that some of the
> > > allocations it
memory) irrespective of the presence of swap".
On Thu, Dec 21, 2017 at 9:29 AM, Tejun Heo wrote:
> Hello, Shakeel.
>
> On Thu, Dec 21, 2017 at 07:22:20AM -0800, Shakeel Butt wrote:
>> I am claiming memory allocations under global pressure will be
>> affected by the perf
> The swap (and its performance) is and should be transparent
> to the job owners.
Please ignore this statement, I didn't mean to claim that job
performance is independent of the underlying swap performance, sorry
about that.
I meant to say that the amount of anon memory a job can allocate
should
On Thu, Dec 21, 2017 at 5:37 AM, Tejun Heo wrote:
> Hello, Shakeel.
>
> On Wed, Dec 20, 2017 at 05:15:41PM -0800, Shakeel Butt wrote:
>> Let's say we have a job that allocates 100 MiB memory and suppose 80
>> MiB is anon and 20 MiB is non-anon (file & kmem).
>>
On Wed, Dec 20, 2017 at 3:36 PM, Tejun Heo wrote:
> Hello, Shakeel.
>
> On Wed, Dec 20, 2017 at 12:15:46PM -0800, Shakeel Butt wrote:
>> > I don't understand how this invariant is useful across different
>> > backing swap devices and availability. e.g. Our OOM
On Wed, Dec 20, 2017 at 12:15 PM, Shakeel Butt wrote:
> On Wed, Dec 20, 2017 at 11:37 AM, Tejun Heo wrote:
>> Hello, Shakeel.
>>
>> On Tue, Dec 19, 2017 at 02:39:19PM -0800, Shakeel Butt wrote:
>>> Suppose a user wants to run multiple instances of a specific job on
On Wed, Dec 20, 2017 at 11:37 AM, Tejun Heo wrote:
> Hello, Shakeel.
>
> On Tue, Dec 19, 2017 at 02:39:19PM -0800, Shakeel Butt wrote:
>> Suppose a user wants to run multiple instances of a specific job on
>> different datacenters and s/he has budget of 100MiB for each insta
On Tue, Dec 19, 2017 at 1:41 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Dec 19, 2017 at 10:25:12AM -0800, Shakeel Butt wrote:
>> Making the runtime environment an invariant is very critical to making
>> the management of a job easier whose instances run on different
>>
On Tue, Dec 19, 2017 at 9:33 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Dec 19, 2017 at 09:23:29AM -0800, Shakeel Butt wrote:
>> To provide consistent memory usage history using the current
>> cgroup-v2's 'swap' interface, an additional metric expressing the
On Tue, Dec 19, 2017 at 7:24 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Dec 19, 2017 at 07:12:19AM -0800, Shakeel Butt wrote:
>> Yes, there are pros & cons, therefore we should give users the option
>> to select the API that is better suited for their use-cases and
On Tue, Dec 19, 2017 at 4:49 AM, Michal Hocko wrote:
> On Mon 18-12-17 16:01:31, Shakeel Butt wrote:
>> The memory controller in cgroup v1 provides the memory+swap (memsw)
>> interface to account for the combined usage of memory and swap of the
>> jobs. The memsw interfac
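For concreteness, here is a minimal userspace sketch of reading those v1 memsw counters. It assumes a cgroup v1 memory hierarchy mounted at /sys/fs/cgroup/memory and a hypothetical cgroup named "job0"; adjust the paths for your setup.

#include <stdio.h>

/* Read a single integer counter from a cgroup control file. */
static long long read_counter(const char *path)
{
    long long val = -1;
    FILE *f = fopen(path, "r");

    if (f) {
        if (fscanf(f, "%lld", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}

int main(void)
{
    /* Hypothetical job cgroup under a v1 memory hierarchy. */
    const char *base = "/sys/fs/cgroup/memory/job0";
    char path[256];
    long long usage, limit;

    snprintf(path, sizeof(path), "%s/memory.memsw.usage_in_bytes", base);
    usage = read_counter(path);
    snprintf(path, sizeof(path), "%s/memory.memsw.limit_in_bytes", base);
    limit = read_counter(path);

    /* memsw accounts memory and swap together, so this single pair of
     * numbers bounds the job's combined footprint. */
    printf("memsw usage: %lld, limit: %lld\n", usage, limit);
    return 0;
}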
or disabling memsw through remounting cgroup v2, will only be effective
if there are no descendants of the root cgroup.
When memsw accounting is enabled then "memory.high" is compared with
memory+swap usage. So, when the allocating job's memsw usage hits its
high mark, the job will be
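To make the comparison described above concrete, here is an illustrative userspace sketch, not the actual kernel code; the struct and the over_high()/memsw_enabled names are hypothetical. With memsw accounting enabled, memory plus swap, rather than memory alone, is checked against the high mark, and the allocating task would then be throttled.

#include <stdbool.h>
#include <stdio.h>

struct job_usage {
    unsigned long memory;   /* memory charged, in bytes */
    unsigned long swap;     /* swap charged, in bytes */
    unsigned long high;     /* "memory.high" threshold */
};

/* With memsw accounting, the value checked against the high mark is
 * memory + swap; otherwise it is memory alone. */
static bool over_high(const struct job_usage *u, bool memsw_enabled)
{
    unsigned long usage = u->memory + (memsw_enabled ? u->swap : 0);

    return usage > u->high;
}

int main(void)
{
    struct job_usage job = { .memory = 70, .swap = 40, .high = 100 };

    /* 70 alone is under the mark, 70 + 40 is over it, so the job is
     * throttled only when memsw accounting is enabled. */
    printf("plain: %d, memsw: %d\n",
           over_high(&job, false), over_high(&job, true));
    return 0;
}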
On Tue, Oct 31, 2017 at 9:40 AM, Johannes Weiner wrote:
> On Tue, Oct 31, 2017 at 08:04:19AM -0700, Shakeel Butt wrote:
>> > +
>> > +static void select_victim_memcg(struct mem_cgroup *root, struct oom_control *oc)
>> > +{
>> > +
> +
> +static void select_victim_memcg(struct mem_cgroup *root, struct oom_control *oc)
> +{
> + struct mem_cgroup *iter;
> +
> + oc->chosen_memcg = NULL;
> + oc->chosen_points = 0;
> +
> + /*
> + * The oom_score is calculated for leaf memory cgroups (including
> +
>> > + if (memcg_has_children(iter))
>> > + continue;
>>
>> && iter != root_mem_cgroup ?
>
> Oh, sure. I had a stupid bug in my test script, which prevented me from
> catching this. Thanks!
>
> This should fix the problem.
> --
> diff --git a/mm/memcontrol.c b/mm
> +
> +static void select_victim_memcg(struct mem_cgroup *root, struct oom_control *oc)
> +{
> + struct mem_cgroup *iter;
> +
> + oc->chosen_memcg = NULL;
> + oc->chosen_points = 0;
> +
> + /*
> + * The oom_score is calculated for leaf memory cgroups (including
> +
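The fix under discussion amounts to one extra clause in the guard that skips intermediate cgroups. Below is a toy userspace model of that selection loop, with hypothetical stand-ins for the kernel structures (it is not the kernel code itself); the point is that the root is still scored even though it has children.

#include <stdio.h>
#include <stddef.h>

struct toy_memcg {
    const char *name;
    int nr_children;
    long score;         /* stand-in for the computed oom_score */
};

static struct toy_memcg *select_victim(struct toy_memcg *root,
                                       struct toy_memcg *all, int n)
{
    struct toy_memcg *chosen = NULL;
    long chosen_points = 0;

    for (int i = 0; i < n; i++) {
        struct toy_memcg *iter = &all[i];

        /* Corrected guard: skip intermediate nodes, but never skip
         * the root memcg itself. */
        if (iter->nr_children > 0 && iter != root)
            continue;

        if (iter->score > chosen_points) {
            chosen_points = iter->score;
            chosen = iter;
        }
    }
    return chosen;
}

int main(void)
{
    struct toy_memcg tree[] = {
        { "root", 2, 50 },  /* has children but is still scored */
        { "A",    2,  0 },  /* intermediate node, skipped */
        { "B",    0, 30 },
        { "C",    0, 80 },
        { "D",    0, 10 },
    };
    struct toy_memcg *victim = select_victim(&tree[0], tree, 5);

    printf("victim: %s\n", victim ? victim->name : "none");
    return 0;
}

Without the "iter != root" clause, the root entry above would be skipped along with "A", which is the problem the reply points out.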
On Mon, Oct 2, 2017 at 12:56 PM, Michal Hocko wrote:
> On Mon 02-10-17 12:45:18, Shakeel Butt wrote:
>> > I am sorry to cut the rest of your proposal because it simply goes over
>> > the scope of the proposed solution while the usecase you are mentioning
>> > is
(Replying again as format of previous reply got messed up).
On Mon, Oct 2, 2017 at 1:00 PM, Tim Hockin wrote:
> In the example above:
>
>        root
>       /    \
>      A      D
>     / \
>    B   C
>
> Does oom_group allow me to express "compare A and D; if A is chosen
> compare B and C; ki
> I am sorry to cut the rest of your proposal because it simply goes over
> the scope of the proposed solution while the usecase you are mentioning
> is still possible. If we want to compare intermediate nodes (which seems
> to be the case) then we can always provide a knob to opt-in - be it your
>
> Yes and nobody is disputing that, really. I guess the main disconnect
> here is that different people want to have more detailed control over
> the victim selection while the patchset tries to handle the most
> simplistic scenario when no userspace control over the selection is
> required. And
>
> Going back to Michal's example, say the user configured the following:
>
>        root
>       /    \
>      A      D
>     / \
>    B   C
>
> A global OOM event happens and we find this:
> - A > D
> - B, C, D are oomgroups
>
> What the user is telling us is that B, C, and D are compound memory
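What the snippet above describes is, roughly, that an oom group is treated as a single compound consumer: it is compared as one unit, and once selected it is killed as a whole. A small sketch of that semantics, with illustrative types and helpers that are not the kernel's oom_group implementation:

#include <stdio.h>

struct toy_task {
    const char *comm;
    long rss;
};

static void kill_task(const struct toy_task *t)
{
    printf("killing %s (rss=%ld)\n", t->comm, t->rss);
}

static void oom_kill_memcg(const struct toy_task *tasks, int n, int oom_group)
{
    if (oom_group) {
        /* B, C and D in the example: the whole group dies together. */
        for (int i = 0; i < n; i++)
            kill_task(&tasks[i]);
        return;
    }

    /* Otherwise only the biggest task in the selected group dies. */
    const struct toy_task *biggest = &tasks[0];

    for (int i = 1; i < n; i++)
        if (tasks[i].rss > biggest->rss)
            biggest = &tasks[i];
    kill_task(biggest);
}

int main(void)
{
    struct toy_task tasks[] = { { "worker0", 40 }, { "worker1", 25 } };

    oom_kill_memcg(tasks, 2, 1);    /* group kill: both tasks die */
    return 0;
}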
On Mon, Sep 4, 2017 at 7:21 AM, Roman Gushchin wrote:
> Introducing the cgroup-aware OOM killer changes the victim selection
> algorithm used by default: instead of picking the largest process,
> it will pick the largest memcg and then the largest process inside.
>
> This affects only cgroup v2 use
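As a rough illustration of that change in default policy (toy data and hypothetical names, not the kernel implementation): the classic killer picks the single largest task system-wide, while the cgroup-aware killer first picks the largest memcg and then the largest task inside it.

#include <stdio.h>
#include <string.h>

struct toy_task {
    const char *comm;
    const char *memcg;
    long rss;
};

/* Largest task overall, or largest task within one memcg if given. */
static const struct toy_task *largest_task(const struct toy_task *t, int n,
                                           const char *memcg)
{
    const struct toy_task *best = NULL;

    for (int i = 0; i < n; i++) {
        if (memcg && strcmp(t[i].memcg, memcg))
            continue;
        if (!best || t[i].rss > best->rss)
            best = &t[i];
    }
    return best;
}

int main(void)
{
    struct toy_task tasks[] = {
        { "big-single", "A", 60 },  /* memcg A totals 60 */
        { "worker0",    "B", 40 },  /* memcg B totals 70 */
        { "worker1",    "B", 30 },
    };

    /* Classic policy: the single largest task, regardless of cgroup. */
    printf("classic victim: %s\n", largest_task(tasks, 3, NULL)->comm);

    /* Cgroup-aware policy: memcg B (70) beats A (60), then the largest
     * task inside B is chosen. */
    printf("cgroup-aware victim: %s\n", largest_task(tasks, 3, "B")->comm);
    return 0;
}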