On Wed, 3 Jul 2019, Waiman Long wrote:
> On 7/3/19 2:56 AM, Michal Hocko wrote:
> > On Tue 02-07-19 14:37:30, Waiman Long wrote:
> >> Currently, a value of '1' is written to the /sys/kernel/slab/<cache>/shrink
> >> file to shrink the slab by flushing all the per-cpu slabs and freeing the
> >> empty slabs in the partial lists
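(As a side note, a minimal user-space sketch of triggering that shrink from C; the "dentry" cache name is only an example, any directory under /sys/kernel/slab works, and the write needs root privileges:)

	/* Minimal sketch: write '1' to a slab cache's sysfs "shrink" file.
	 * "dentry" is an example cache; substitute any /sys/kernel/slab entry. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/sys/kernel/slab/dentry/shrink", O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, "1", 1) != 1)
			perror("write");
		close(fd);
		return 0;
	}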
On Thu, 27 Jun 2019, Roman Gushchin wrote:
> so that objects belonging to different memory cgroups can share the same page
> and kmem_caches.
>
> It's a fairly big change though.
Could this be done at another level? Put a cgroup pointer into the
corresponding structures and then go back to just a
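(A rough, purely illustrative sketch of that alternative; the structure and field names below are mine, not the kernel's: one shared kmem_cache, with the owning cgroup recorded per object rather than a clone of the cache per memcg:)

	/* Illustrative only: one shared cache plus a per-object owner array,
	 * instead of duplicating kmem_caches per memory cgroup. */
	struct kmem_cache;    /* opaque here */
	struct mem_cgroup;    /* opaque here */

	struct slab_page_meta {
		struct kmem_cache *cache;     /* single cache shared by all cgroups    */
		struct mem_cgroup **owners;   /* owning cgroup for each object in page */
		unsigned int nr_objects;
	};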
On Mon, 19 Nov 2018, Jerome Glisse wrote:
> > IIRC this is solved in IB by automatically calling
> > madvise(MADV_DONTFORK) before creating the MR.
> >
> > MADV_DONTFORK
> > .. This is useful to prevent copy-on-write semantics from changing the
> > physical location of a page if the parent wri
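(For illustration, a hedged user-space sketch of what the IB path is described as doing: allocate the buffer that will back the MR and mark it MADV_DONTFORK so a later fork() cannot COW-move its pages; error handling is kept minimal and the helper name is made up:)

	/* Sketch: page-aligned buffer marked MADV_DONTFORK so fork() will not
	 * duplicate (and potentially COW-relocate) the pages behind an RDMA MR. */
	#include <stddef.h>
	#include <sys/mman.h>

	void *alloc_mr_buffer(size_t len)
	{
		void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return NULL;
		if (madvise(buf, len, MADV_DONTFORK)) {
			munmap(buf, len);
			return NULL;
		}
		return buf;
	}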
On Fri, 16 Feb 2018, Matthew Wilcox wrote:
> On Fri, Feb 16, 2018 at 09:44:25AM -0600, Christopher Lameter wrote:
> > On Thu, 15 Feb 2018, Matthew Wilcox wrote:
> > > What I was proposing was an intermediate page allocator where slab would
> > > request 2MB for its
On Thu, 15 Feb 2018, Matthew Wilcox wrote:
> > The inducing of releasing memory back is not there but you can run SLUB
> > with MAX_ORDER allocations by passing "slub_min_order=9" or so on bootup.
>
> This is subtly different from the idea that I had. If you set
> slub_min_order to 9, then slub w
On Thu, 15 Feb 2018, Matthew Wilcox wrote:
> On Thu, Feb 15, 2018 at 09:49:00AM -0600, Christopher Lameter wrote:
> > On Thu, 15 Feb 2018, Matthew Wilcox wrote:
> >
> > > What if ... on startup, slab allocated a MAX_ORDER page for itself.
> > > It would the
On Thu, 15 Feb 2018, Matthew Wilcox wrote:
> What if ... on startup, slab allocated a MAX_ORDER page for itself.
> It would then satisfy its own page allocation requests from this giant
> page. If we start to run low on memory in the rest of the system, slab
> can be induced to return some of it
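(A toy, purely illustrative sketch of that scheme, not SLUB code: carve page-sized chunks out of one large region grabbed up front, and track how much is still unused, i.e. what could be handed back under memory pressure:)

	/* Toy model of the idea above: satisfy page requests from one big
	 * reservation obtained at startup. Sizes and names are illustrative. */
	#include <stddef.h>
	#include <stdint.h>

	#define PAGE_SZ       4096
	#define RESERVE_PAGES 512                 /* a single 2MB reservation */

	static _Alignas(PAGE_SZ) uint8_t reserve[RESERVE_PAGES * PAGE_SZ];
	static size_t next_free;                  /* bump pointer, no recycling */

	/* Hand out one page, or NULL so the caller falls back to the normal
	 * page allocator. */
	static void *reserve_alloc_page(void)
	{
		if (next_free >= RESERVE_PAGES)
			return NULL;
		return &reserve[next_free++ * PAGE_SZ];
	}

	/* Pages still unused: what could be "induced" to go back to the rest
	 * of the system under memory pressure. */
	static size_t reserve_pages_unused(void)
	{
		return RESERVE_PAGES - next_free;
	}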
On Tue, 7 Nov 2017, Chris Metcalf wrote:
> > Presumably we have another context there where we may be able to call into
> > the cleanup code with interrupts enabled.
>
> Right now for task isolation we run with interrupts enabled during the
> initial sys_prctl() call, and call quiet_vmstat_sync() t
On Mon, 6 Nov 2017, Chris Metcalf wrote:
> On 11/6/2017 10:38 AM, Christopher Lameter wrote:
> > > What about that d*mn 1 Hz clock?
> > >
> > > It's still there, so this code still requires some further work before
> > > it can actually get a pr
On Fri, 3 Nov 2017, Chris Metcalf wrote:
> However, it doesn't seem possible to do the synchronous cancellation of
> the vmstat deferred work with irqs disabled, though if there's a way,
> it would be a little cleaner to do that; Christoph? We can certainly
> update the statistics with interrupts
On Fri, 20 Oct 2017, changbin...@intel.com wrote:
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 269b5df..2a960fc 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -501,6 +501,43 @@ void prep_transhuge_page(struct page *page)
> set_compound_page_dtor(page, TRANSHUGE_PAGE_DTOR);
On Thu, 7 Sep 2017, David Rientjes wrote:
> > It has *nothing* to do with zillions of tasks. It's amusing that the SGI
> > ghost is still haunting the discussion here. The company died a couple of
> > years ago finally (ok somehow HP has an "SGI" brand now I believe). But
> > there are multiple com
On Thu, 7 Sep 2017, Roman Gushchin wrote:
> On Thu, Sep 07, 2017 at 10:03:24AM -0500, Christopher Lameter wrote:
> > On Thu, 7 Sep 2017, Roman Gushchin wrote:
> >
> > > > Really? From what I know and worked on way back when: The reason was to
> > > >
On Wed, 6 Sep 2017, Michal Hocko wrote:
> I am not sure this is how things evolved actually. This is way before
> my time so my git log interpretation might be imprecise. We do have
> oom_badness heuristic since out_of_memory has been introduced and
> oom_kill_allocating_task has been introduced m
On Tue, 5 Sep 2017, Michal Hocko wrote:
> I would argue that we should simply deprecate and later drop the sysctl.
> I _strongly_ doubt anybody is using this. If yes, it is not that hard
> to change the kernel command line rather than select the sysctl. The
> deprecation process would be
>
On Mon, 4 Sep 2017, Roman Gushchin wrote:
> To address these issues, a cgroup-aware OOM killer is introduced.
You are missing a major issue here. Processes may have allocation
constraints to memory nodes, special DMA zones, etc. OOM conditions on
such resource-constrained allocations need to be d
On Thu, 7 Sep 2017, Roman Gushchin wrote:
> > Really? From what I know and worked on way back when: The reason was to be
> > able to contain the affected application in a cpuset. Multiple apps may
> > have been running in multiple cpusets on a large NUMA machine and the OOM
> > condition in one cp
On Wed, 6 Sep 2017, David Rientjes wrote:
> > The oom_kill_allocating_task sysctl, which causes the OOM killer
> > to simply kill the allocating task, is useless. Killing a random
> > task is not the best idea.
> >
> > Nobody likes it, and hopefully nobody uses it.
> > We want to completely deprec