On Fri, 31 May 2024, Vlastimil Babka wrote:
> Patches 3 and 4 implement the static keys for the two mm fault injection
> sites in slab and page allocators. For a quick demonstration I've run a
> VM and the simple test from [1] that stresses the slab allocator and got
> this time before the series:
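As a rough illustration of the static-key pattern the cover letter describes, here is a minimal sketch. The key name fail_page_alloc_active and the exact hook placement are assumptions; only the jump-label API and __should_fail_alloc_page() are taken as given.

	#include <linux/gfp.h>
	#include <linux/jump_label.h>

	/* Hypothetical key; armed only when fault injection is configured. */
	DEFINE_STATIC_KEY_FALSE(fail_page_alloc_active);

	static inline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
	{
		/* Compiles to a patched-out branch until the key is enabled. */
		if (!static_branch_unlikely(&fail_page_alloc_active))
			return false;
		return __should_fail_alloc_page(gfp_mask, order);
	}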
On Tue, 26 Dec 2023, Pasha Tatashin wrote:
> iommu allocations should be accounted in order to allow admins to
> monitor and limit the amount of iommu memory.
>
> Signed-off-by: Pasha Tatashin
> Acked-by: Michael S. Tsirkin
Acked-by: David Rientjes
On Sat, 17 Apr 2021, chukaiping wrote:
> Currently the proactive compaction order is fixed to
> COMPACTION_HPAGE_ORDER(9), it's OK in most machines with lots of
> normal 4KB memory, but it's too high for the machines with small
> normal memory, for example the machines with most memory configured
On Mon, 12 Apr 2021, chukaiping wrote:
> Currently the proactive compaction order is fixed to
> COMPACTION_HPAGE_ORDER(9), it's OK in most machines with lots of
> normal 4KB memory, but it's too high for the machines with small
> normal memory, for example the machines with most memory configured
> Cc: Wanpeng Li
> Signed-off-by: Sean Christopherson
Always happy to see this ambiguity (SLAB_ACCOUNT vs GFP_KERNEL_ACCOUNT)
resolved for slab allocations.
Acked-by: David Rientjes
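For context, these are the two ways a slab allocation ends up charged to the allocating memcg. The cache name and struct foo are placeholders; SLAB_ACCOUNT and GFP_KERNEL_ACCOUNT are the real flags being contrasted.

	#include <linux/slab.h>

	struct foo { int x; };			/* placeholder object type */
	static struct kmem_cache *foo_cache;

	static int foo_demo(void)
	{
		struct foo *a, *b;

		/* 1) Per-cache accounting: every object from this cache is charged. */
		foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo), 0,
					      SLAB_ACCOUNT, NULL);
		if (!foo_cache)
			return -ENOMEM;
		a = kmem_cache_alloc(foo_cache, GFP_KERNEL);

		/* 2) Per-call-site accounting: only this allocation is charged. */
		b = kmalloc(sizeof(*b), GFP_KERNEL_ACCOUNT);

		kfree(b);
		if (a)
			kmem_cache_free(foo_cache, a);
		return 0;
	}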
On Mon, 22 Mar 2021, Zi Yan wrote:
> From: Zi Yan
>
> We did not have a direct user interface for splitting the compound page
> backing a THP and there is no need unless we want to expose the THP
> implementation details to users. Make /split_huge_pages accept
> a new command to do that.
>
> By
> open-coded implementation, just use kvmalloc().
>
> This improves the allocation speed of vmalloc(4MB) by approximately
> 5% in our benchmark. It's still dominated by the 1024 calls to
> alloc_pages_node(), which will be the subject of a later patch.
>
> Signed-off-by: M
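A sketch of the kind of open-coded fallback that kvmalloc() replaces; buf and size are illustrative.

	void *buf;

	/* Before: hand-rolled kmalloc-then-vmalloc fallback. */
	buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (!buf)
		buf = vmalloc(size);
	kvfree(buf);			/* kvfree() is safe for either backing */

	/* After: kvmalloc() attempts a physically contiguous allocation and
	 * falls back to vmalloc() internally when that is too expensive. */
	buf = kvmalloc(size, GFP_KERNEL);
	kvfree(buf);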
On Wed, 24 Mar 2021, Matthew Wilcox (Oracle) wrote:
> Allow the caller of kvmalloc to specify who counts as the allocator
> of the memory instead of assuming it's the immediate caller.
>
> Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: David Rientjes
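A hedged sketch of how such an interface might be used. The name kvmalloc_node_caller() and its argument order are assumptions based on the description above; _RET_IP_ and NUMA_NO_NODE are existing kernel symbols.

	/* Hypothetical: attribute the allocation to our caller, not this helper. */
	void *my_subsys_alloc(size_t size)
	{
		return kvmalloc_node_caller(size, GFP_KERNEL, NUMA_NO_NODE, _RET_IP_);
	}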
On Wed, 24 Mar 2021, Matthew Wilcox (Oracle) wrote:
> People should know to cc Vlad on vmalloc-related patches. With this,
> get_maintainer.pl suggests:
>
> Uladzislau Rezki (maintainer:VMALLOC)
> Andrew Morton (maintainer:MEMORY MANAGEMENT)
> linux...@kvack.org (open list:VMALLOC)
> linux-ker
On Wed, 24 Mar 2021, Bhaskar Chowdhury wrote:
> diff --git a/mm/slub.c b/mm/slub.c
> index 3021ce9bf1b3..cd3c7be33f69 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3,7 +3,7 @@
> * SLUB: A slab allocator that limits cache line use instead of queuing
> * objects in per cpu and per node lists.
revious patch "selftests: add a kselftest for SLUB
> debugging functionality".
>
> Signed-off-by: Oliver Glitta
Very nice!
Acked-by: David Rientjes
>
> Add new option CONFIG_TEST_SLUB in Kconfig.
>
> Add parameter to function validate_slab_cache() to return
> number of errors in cache.
>
> Signed-off-by: Oliver Glitta
Acked-by: David Rientjes
On Wed, 17 Mar 2021, Vlastimil Babka wrote:
> > Greeting,
> >
> > FYI, we noticed the following commit (built with gcc-9):
> >
> > commit: e48d82b67a2b760eedf7b95ca15f41267496386c ("[PATCH 1/2] selftests:
> > add a kselftest for SLUB debugging functionality")
> > url:
> > https://github.com/0d
> Fixes: ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
> Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
> ---
> mm/slub.c | 9 +
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 350a3
6-e9b0-0c36-ebdd-2bc684c5a...@redhat.com/#t
>
> Suggested-by: Vlastimil Babka
> Signed-off-by: Yafang Shao
> Acked-by: Vlastimil Babka
> Reviewed-by: Miaohe Lin
> Reviewed-by: Andy Shevchenko
> Reviewed-by: David Hildenbrand
> Cc: Matthew Wilcox
Acked-by: David Rientjes
ss the memcg limits (commit
> 1f14c1ac19aa4 ("mm: memcg: do not allow task about to OOM kill to bypass
> the limit")), we can again allow __GFP_NOFAIL allocations to trigger
> memcg oom-kill. This will make memcg oom behavior closer to page
> allocator oom behavior.
>
> Signed-off-by: Shakeel Butt
Acked-by: David Rientjes
's lock.
>
> This patch should fix the above issues.
>
> Fixes: 5a811889de10 ("mm, compaction: use free lists to quickly locate a
> migration target")
> Cc:
> Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
On Tue, 16 Feb 2021, Michal Hocko wrote:
> > Hugepages can be preallocated to avoid unpredictable allocation latency.
> > If we run into 4k page shortage, the kernel can trigger OOM even though
> > there were free hugepages. When OOM is triggered by user address page
> > fault handler, we can use
e is captured in capture_control.
>
> Signed-off-by: Charan Teja Reddy
Acked-by: David Rientjes
On Tue, 9 Feb 2021, Zhiyuan Dai wrote:
> This patch adds whitespace to fix coding style issues and
> improve code readability.
>
> Signed-off-by: Zhiyuan Dai
Acked-by: David Rientjes
On Tue, 9 Feb 2021, Zhiyuan Dai wrote:
> Fixed some coding style issues to improve code readability.
> This patch adds whitespace to clearly separate the parameters.
>
> Signed-off-by: Zhiyuan Dai
Acked-by: David Rientjes
On Sun, 7 Feb 2021, Song Bao Hua (Barry Song) wrote:
> The NUMA balancer is just one of many reasons for page migration. Even a
> single alloc_pages() call can cause memory migration within one NUMA
> node or on a UMA system.
>
> The other reasons for page migration include but are not limited to:
> * me
On Tue, 2 Feb 2021, Charan Teja Kalla wrote:
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index 519a60d..531f244 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -4152,6 +4152,8 @@ __alloc_pages_direct_compact(gfp_t gfp_mask,
> >> unsigned int order,
> >>memall
On Mon, 1 Feb 2021, Dan Williams wrote:
> > > I don't have an objection to binding, but doesn't this require that the
> > > check in cxl_validate_cmd_from_user() guarantees send_cmd->size_in cannot
> > > be greater than 1MB?
> >
> > You're correct. I'd need to add:
> > cxlm->mbox.payload_size =
>
On Mon, 1 Feb 2021, Ben Widawsky wrote:
> > I haven't seen the update to 8.2.8.4.5 to know yet :)
> >
> > You make a good point of at least being able to interact with the driver.
> > I think you could argue that if the driver binds, then the payload size is
> > accepted, in which case it woul
On Mon, 1 Feb 2021, Ben Widawsky wrote:
> > I think that's what 8.2.8.4.3 says, no? And then 8.2.8.4.5 says you
> > can use up to Payload Size. That's why my recommendation was to enforce
> > this in cxl_mem_setup_mailbox() up front.
>
> Yeah. I asked our spec people to update 8.2.8.4.5 to ma
On Mon, 1 Feb 2021, Ben Widawsky wrote:
> > > > > > > > +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> > > > > > > > +{
> > > > > > > > + const int cap = cxl_read_mbox_reg32(cxlm,
> > > > > > > > CXLDEV_MB_CAPS_OFFSET);
> > > > > > > > +
> > > > > > > > + cxlm->mbox.payload
On Mon, 1 Feb 2021, Ben Widawsky wrote:
> > > > > +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> > > > > +{
> > > > > + const int cap = cxl_read_mbox_reg32(cxlm,
> > > > > CXLDEV_MB_CAPS_OFFSET);
> > > > > +
> > > > > + cxlm->mbox.payload_size =
> > > > > + 1 << CXL
On Mon, 1 Feb 2021, Ben Widawsky wrote:
> > > diff --git a/Documentation/ABI/testing/sysfs-bus-cxl
> > > b/Documentation/ABI/testing/sysfs-bus-cxl
> > > new file mode 100644
> > > index ..fe7b87eba988
> > > --- /dev/null
> > > +++ b/Documentation/ABI/testing/sysfs-bus-cxl
> > > @@ -0,
On Mon, 1 Feb 2021, Ben Widawsky wrote:
> On 21-01-30 15:51:49, David Rientjes wrote:
> > On Fri, 29 Jan 2021, Ben Widawsky wrote:
> >
> > > +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> > > +{
> > > + const int cap = cxl_r
On Mon, 1 Feb 2021, Miaohe Lin wrote:
> >> Hugepage sizes in units of GB are supported. We could show the page size in
> >> units of GB to make it friendlier to read. Also rework the page size unit
> >> calculation code to make it more readable.
> >>
> >> Signed-off-by: Miaohe Lin
> >> ---
> >> fs/
On Mon, 1 Feb 2021, Miaohe Lin wrote:
> The helper range_in_vma() is introduced via commit 017b1660df89 ("mm:
> migration: fix migration of huge PMD shared pages"). But we forgot to
> use it in __split_huge_pud_locked() and __split_huge_pmd_locked().
>
> Signed-off-by: Miaohe Lin
> ---
> mm/hug
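For reference, range_in_vma() (include/linux/mm.h) is just the open-coded bounds check wrapped up, so the conversion is mechanical; roughly:

	static inline bool range_in_vma(struct vm_area_struct *vma,
					unsigned long start, unsigned long end)
	{
		return (vma && vma->vm_start <= start && end <= vma->vm_end);
	}

	/* An open-coded assertion such as */
	VM_BUG_ON_VMA(vma->vm_start > haddr ||
		      vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
	/* can then be written as */
	VM_BUG_ON_VMA(!range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE), vma);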
On Mon, 1 Feb 2021, Charan Teja Reddy wrote:
> By defination, COMPACT[STALL|FAIL] events needs to be counted when there
s/defination/definition/
> is 'At least in one zone compaction wasn't deferred or skipped from the
> direct compaction'. And when compaction is skipped or deferred,
> COMPACT_S
On Mon, 25 Jan 2021, Dave Hansen wrote:
> diff -puN mm/migrate.c~0006-node-Define-and-export-memory-migration-path
> mm/migrate.c
> --- a/mm/migrate.c~0006-node-Define-and-export-memory-migration-path
> 2021-01-25 16:23:09.553866709 -0800
> +++ b/mm/migrate.c2021-01-25 16:23:09.558866709 -0
On Mon, 25 Jan 2021, Dave Hansen wrote:
> This also contains a few prerequisite patches that fix up an issue
> with the vm.zone_reclaim_mode sysctl ABI.
>
I think these patches (patches 1-3) can be staged in -mm now since they
fix vm.zone_reclaim_mode correctness and consistency.
Andrew, would
: "Tobin C. Harding"
> Cc: Christoph Lameter
> Cc: Andrew Morton
> Cc: Huang Ying
> Cc: Dan Williams
> Cc: Qian Cai
> Cc: Daniel Wagner
> Cc: osalvador
Acked-by: David Rientjes
> [ 9556.711404] amdgpu_drm_ioctl+0x49/0x80 [amdgpu]
> [ 9556.711411] __x64_sys_ioctl+0x83/0xb0
> [ 9556.711417] do_syscall_64+0x33/0x80
> [ 9556.711421] entry_SYSCALL_64_after_hwframe+0x44/0xa9
>
> Fixes: bf9eee249ac2 ("drm/ttm: stop using GFP_TRANSHUGE_LIGHT")
> Signed-off-by: Michel Dänzer
Acked-by: David Rientjes
Mikhail Gavrilov reported the same issue.
On Sat, 30 Jan 2021, David Rientjes wrote:
> On Sun, 31 Jan 2021, Mikhail Gavrilov wrote:
>
> > The 5.11-rc5 (git 76c057c84d28) brought a new issue.
> > Now the kernel log is flooded with the message "page allocation failure".
> >
> > Trace:
> >
On Sun, 31 Jan 2021, Mikhail Gavrilov wrote:
> The 5.11-rc5 (git 76c057c84d28) brought a new issue.
> Now the kernel log is flooded with the message "page allocation failure".
>
> Trace:
> msedge:cs0: page allocation failure: order:10,
Order-10, wow!
ttm_pool_alloc() will start at order-10 and
On Fri, 29 Jan 2021, Ben Widawsky wrote:
> +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
> +{
> + const int cap = cxl_read_mbox_reg32(cxlm, CXLDEV_MB_CAPS_OFFSET);
> +
> + cxlm->mbox.payload_size =
> + 1 << CXL_GET_FIELD(cap, CXLDEV_MB_CAP_PAYLOAD_SIZE);
> +
> + /
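A sketch of the up-front enforcement discussed later in the thread, reusing the register helpers from the quoted patch. The 1 MB upper bound comes from the review discussion; the 256-byte lower bound is an assumption about the spec minimum.

	static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
	{
		const int cap = cxl_read_mbox_reg32(cxlm, CXLDEV_MB_CAPS_OFFSET);

		cxlm->mbox.payload_size =
			1 << CXL_GET_FIELD(cap, CXLDEV_MB_CAP_PAYLOAD_SIZE);

		/* Refuse to bind if the device advertises an out-of-range size. */
		if (cxlm->mbox.payload_size < 256 ||
		    cxlm->mbox.payload_size > SZ_1M)
			return -ENXIO;

		return 0;
	}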
On Fri, 29 Jan 2021, Ben Widawsky wrote:
> diff --git a/Documentation/ABI/testing/sysfs-bus-cxl
> b/Documentation/ABI/testing/sysfs-bus-cxl
> new file mode 100644
> index ..fe7b87eba988
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-bus-cxl
> @@ -0,0 +1,26 @@
> +What:
On Fri, 29 Jan 2021, Ben Widawsky wrote:
> Provide enough functionality to utilize the mailbox of a memory device.
> The mailbox is used to interact with the firmware running on the memory
> device.
>
> The CXL specification defines separate capabilities for the mailbox and
> the memory device. T
device(pdev);
> + if (rc)
> + return rc;
>
> regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC);
> if (!regloc) {
> dev_err(dev, "register location dvsec not found\n");
> return -ENXIO;
> }
> + regloc += 0xc; /* Skip DVSEC + reserved fields */
Assuming the DVSEC revision number is always 0x0 or there's no value in
storing this in struct cxl_mem for the future.
Acked-by: David Rientjes
d management of
> + devices supporting these protocols.
> +
> +if CXL_BUS
> +
> +config CXL_MEM
> + tristate "CXL.mem: Endpoint Support"
Nit: "CXL.mem: Memory Devices" or "CXL Memory Devices: CXL.mem" might look
better, but feel free to ignore.
Acked-by: David Rientjes
On Sat, 30 Jan 2021, Miaohe Lin wrote:
> Hugepage sizes in units of GB are supported. We could show the page size in
> units of GB to make it friendlier to read. Also rework the page size unit
> calculation code to make it more readable.
>
> Signed-off-by: Miaohe Lin
> ---
> fs/hugetlbfs/inode.c |
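A minimal sketch of the kind of unit reduction such a rework could use; this is illustrative, not the actual fs/hugetlbfs/inode.c code.

	/* Reduce a size given in KB to the largest whole unit for display,
	 * e.g. 2048 -> "2M", 1048576 -> "1G". */
	static void hugetlb_size_to_string(unsigned long size_kb, char *buf, size_t len)
	{
		static const char units[] = "KMGTPE";
		int i = 0;

		while (size_kb >= 1024 && !(size_kb % 1024) && units[i + 1]) {
			size_kb /= 1024;
			i++;
		}
		snprintf(buf, len, "%lu%c", size_kb, units[i]);
	}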
: 1972064 kB
>
> Note: We print even without CONFIG_CMA, just like "nr_free_cma"; this way,
> one can be sure when spotting "cma 0" that there are definitely no
> CMA pages located in a zone.
>
> Cc: Andrew Morton
> Cc: Thomas Gleixne
t;slowdowns).
>
> Introduce and use a new common function for doing this and eliminate
> all functions-duplicates from drivers.
>
> Suggested-by: David Rientjes
> Signed-off-by: Alexander Lobakin
Looks even better than I thought!
(Since all of the changes are in drivers/net/etherne
y: Jesper Dangaard Brouer
> Reviewed-by: Ilias Apalodimas
Acked-by: David Rientjes
On Wed, 27 Jan 2021, Alexander Lobakin wrote:
> The function doesn't write anything to the page struct itself,
> so this argument can be const.
>
> Misc: align second argument to the brace while at it.
>
> Signed-off-by: Alexander Lobakin
Acked-by: David Rientjes
On Wed, 27 Jan 2021, Alexander Lobakin wrote:
> The function only tests for page->index, so its argument should be
> const.
>
> Signed-off-by: Alexander Lobakin
Acked-by: David Rientjes
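The change itself is just const-qualifying the argument; since the helper only reads page->index, it ends up looking roughly like:

	/* Only reads page->index, so a const pointer is sufficient. */
	static inline bool page_is_pfmemalloc(const struct page *page)
	{
		/*
		 * The page index cannot legitimately be this large, so the
		 * value is reused as a pfmemalloc marker.
		 */
		return page->index == -1UL;
	}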
flags=0x17c0010200
>
> While after this change, the output is,
> [ 8846.517809] INFO: Slab 0xf42a2c60 objects=33 used=3
> fp=0x60d32ca8 flags=0x17c0010200(slab|head)
>
> Reviewed-by: David Hildenbrand
> Signed-off-by: Yafang Shao
Acked-by: David Rientjes
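The decoded names in that output are what the %pGp printk specifier produces; assuming the print combines the raw hex value with %pGp, the line would come from something like:

	/* Sketch: raw value plus decoded page flag names via %pGp. */
	pr_err("INFO: Slab 0x%p objects=%u used=%u fp=0x%p flags=%#lx(%pGp)\n",
	       page, page->objects, page->inuse, page->freelist,
	       page->flags, &page->flags);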
On Thu, 28 Jan 2021, David Hildenbrand wrote:
> > On Thu, 28 Jan 2021, David Hildenbrand wrote:
> >
> >> diff --git a/mm/vmstat.c b/mm/vmstat.c
> >> index 7758486097f9..957680db41fa 100644
> >> --- a/mm/vmstat.c
> >> +++ b/mm/vmstat.c
> >> @@ -1650,6 +1650,11 @@ static void zoneinfo_show_print(s
On Thu, 28 Jan 2021, David Hildenbrand wrote:
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 7758486097f9..957680db41fa 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1650,6 +1650,11 @@ static void zoneinfo_show_print(struct seq_file *m,
> pg_data_t *pgdat,
> zone->spanne
; anymore.
>
> Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
On Thu, 21 Jan 2021, Sean Christopherson wrote:
> True, but the expected dual-usage is more about backwards compatibility than
> anything else. Running an SEV-ES VM requires a heavily enlightened guest
> vBIOS
> and kernel, which means that a VM that was created as an SEV guest cannot
> easily
On Tue, 26 Jan 2021, Muchun Song wrote:
> > I'm not sure that Kconfig is the right place to document functional
> > behavior of the kernel, especially for non-configurable options. Seems
> > like this is already served by existing comments added by this patch
> > series in the files where the des
On Mon, 25 Jan 2021, Muchun Song wrote:
> > >> I'm not sure I understand the rationale for providing this help text if
> > >> this is def_bool depending on CONFIG_HUGETLB_PAGE. Are you intending
> > >> that
> > >> this is actually configurable and we want to provide guidance to the
> > >> admin
On Mon, 25 Jan 2021, Alexander Lobakin wrote:
> Constify "page" argument for page_is_pfmemalloc() users where applicable.
>
> Signed-off-by: Alexander Lobakin
> ---
> drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 2 +-
> drivers/net/ethernet/intel/fm10k/fm10k_main.c | 2 +-
> drivers/
Poison later.
>
> Signed-off-by: Muchun Song
> Reviewed-by: Oscar Salvador
Acked-by: David Rientjes
On Sun, 17 Jan 2021, Muchun Song wrote:
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index ce4be1fa93c2..3b146d5949f3 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -29,6 +29,7 @@
> #include
> #include
> #include
> +#include
>
> #include
> #include
>
On Sun, 17 Jan 2021, Muchun Song wrote:
> The HUGETLB_PAGE_FREE_VMEMMAP option is used to enable the freeing
> of unnecessary vmemmap associated with HugeTLB pages. The config
> option is introduced early so that supporting code can be written
> to depend on the option. The initial version of the
On Sun, 17 Jan 2021, Muchun Song wrote:
> In the subsequent patch, we should allocate the vmemmap pages when
> freeing HugeTLB pages. But update_and_free_page() is always called
> while holding hugetlb_lock, so we cannot use GFP_KERNEL to allocate
> vmemmap pages. However, we can defer the actual f
On Wed, 20 Jan 2021, Vlastimil Babka wrote:
> On 1/19/21 8:26 PM, David Rientjes wrote:
> > On Mon, 18 Jan 2021, Charan Teja Reddy wrote:
> >
> >> should_proactive_compact_node() returns true when sum of the
> >> weighted fragmentation score of all the zones in
ed
> average is above wmark_high, then the individual (unadjusted) score of at
> least one zone has to be above wmark_high. Thus it avoids unnecessary
> triggers and deferrals of proactive compaction.
>
> Fix-suggested-by: Vlastimil Babka
Suggested-by
> Signed-off-by: Charan Teja Reddy
Acked-by: David Rientjes
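A sketch of the idea, not the exact mm/compaction.c code: the node-wide score is a zone-size-weighted sum, so it can only exceed wmark_high if at least one zone's unweighted score does, and checking the per-zone score directly avoids waking proactive compaction for tiny fragmented zones.

	static bool node_needs_proactive_compaction(pg_data_t *pgdat,
						    unsigned int wmark_high)
	{
		int i;

		for (i = 0; i < MAX_NR_ZONES; i++) {
			struct zone *zone = &pgdat->node_zones[i];

			if (!populated_zone(zone))
				continue;
			/* Per-zone score, not adjusted by the zone/node size ratio. */
			if (fragmentation_score_zone(zone) > wmark_high)
				return true;
		}
		return false;
	}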
On Fri, 15 Jan 2021, Tang Yizhou wrote:
> If p is a kthread, it will be checked in oom_unkillable_task() so
> we can delete the corresponding comment.
>
> Signed-off-by: Tang Yizhou
Acked-by: David Rientjes
other* stack trace, so the overhead adds up, and on my tests (on
> ARCH=um, admittedly) 2/3rds of the allocations end up doing
> the stack tracing.
>
> Turn off SLAB_STORE_USER if SLAB_NOLEAKTRACE was given, to avoid
> storing the essentially same data twice.
>
> Signed-
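The mechanics are a simple flag fixup when the cache's debug flags are computed; a rough sketch (the helper name is made up, the flags are real).

	static slab_flags_t fixup_store_user(slab_flags_t flags)
	{
		/*
		 * SLAB_NOLEAKTRACE caches hold kmemleak's own metadata;
		 * storing a SLAB_STORE_USER track for those would record
		 * essentially the same stack a second time, so drop it.
		 */
		if (flags & SLAB_NOLEAKTRACE)
			flags &= ~SLAB_STORE_USER;
		return flags;
	}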
On Thu, 14 Jan 2021, Vlastimil Babka wrote:
> On 1/8/21 7:46 PM, Christoph Lameter wrote:
> > I am ok with you as a slab maintainer. I have seen some good work from
> > you.
> >
> > Acked-by: Christoph Lameter
>
> Thanks!
>
Acked-by: David Rientjes
Great addition!
> > Signed-off-by: Hyunwook (Wooky) Baek
> > Acked-by: David Rientjes
> > ---
> >
> > This patch is tested by invoking INSB/OUTSB instructions in kernel space in
> > a
> > SEV-ES-enabled VM. Without the patch, the kernel crashed with the following
> >
pported.
> > > + */
> > > + if (!capable(CAP_SYS_NICE)) {
> > > + ret = -EPERM;
> > > + goto release_task;
> >
> > mmput?
>
> Ouch! Thanks for pointing it out! Will include in the next respin.
>
With the fix, feel free to add:
Acked-by: David Rientjes
Thanks Suren!
ial() fails and new_slab_objects() falls back to
> new_slab(), allocating new pages. This could lead to an unnecessary
> increase in memory fragmentation.
>
> Fixes: 7ced37197196 ("slub: Acquire_slab() avoid loop")
> Signed-off-by: Jann Horn
Acked-by: David Rientjes
Indee
erased objects, similarly
> to CONFIG_SLUB=y behavior.
>
> Signed-off-by: Alexander Popov
> Reviewed-by: Alexander Potapenko
Acked-by: David Rientjes
't care about those). While using the 'mapping' name would automagically
> keep the code correct if the unions in struct page changed, such changes
> should
> be done consciously and needed changes evaluated - the comment should help
> with
> that.
>
> Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
On Thu, 10 Dec 2020, Christian Borntraeger wrote:
> > * However, the boilerplate to usefulness ratio doesn't look too good and I
> > wonder whether what we should do is adding a generic "misc" controller
> > which can host this sort of static hierarchical counting. I'll think more
> > on it.
On Wed, 9 Dec 2020, Brijesh Singh wrote:
> Noted, I will send v2 with these fixed.
>
And with those changes:
Acked-by: David Rientjes
Thanks Brijesh!
On Tue, 24 Nov 2020, Vipin Sharma wrote:
> > > Looping Janosch and Christian back into the thread.
> > >
> > >
> > >
> > > I interpret this suggestion as
On Fri, 20 Nov 2020, Pavel Tatashin wrote:
> Recently, I encountered a hang that is happening during memory hot
> remove operation. It turns out that the hang is caused by pinned user
> pages in ZONE_MOVABLE.
>
> Kernel expects that all pages in ZONE_MOVABLE can be migrated, but
> this is not the
this heuristic. So in case this
> > change regresses somebody's performance, there's a way around it and thus
> > the risk is low IMHO.
>
> I agree. For the absolute majority of users there will be no difference.
> And there is a good workaround for the rest.
>
> Acked-by: Roman Gushchin
>
Acked-by: David Rientjes
On Mon, 2 Nov 2020, Sean Christopherson wrote:
> On Fri, Oct 02, 2020 at 01:48:10PM -0700, Vipin Sharma wrote:
> > On Fri, Sep 25, 2020 at 03:22:20PM -0700, Vipin Sharma wrote:
> > > I agree with you that the abstract name is better than the concrete
> > > name, I also feel that we must provide HW
P_STACK when the thread stack
> size is smaller than the PAGE_SIZE.
>
> Fixes: ec9f02384f60 ("mm: workingset: fix vmstat counters for shadow nodes")
> Signed-off-by: Muchun Song
> Acked-by: Roman Gushchin
Acked-by: David Rientjes
I assume that without this fix the
mcg: deprecate the non-hierarchical mode
> docs: cgroup-v1: reflect the deprecation of the non-hierarchical mode
> cgroup: remove obsoleted broken_hierarchy and warned_broken_hierarchy
>
For all three patches:
Acked-by: David Rientjes
Very welcome change to see; we've always prevented the non-hierarchical
mode from being set in our kernel.
On Tue, 3 Nov 2020, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed the following commit (built with gcc-9):
>
> commit: 1ea6c22c9b85ec176bb78d7076be06a4142f8bdd ("[PATCH 1/2] cma: redirect
> page allocation to CMA")
> url:
> https://github.com/0day-ci/linux/commits/Chris-Goldsworthy/
On Wed, 21 Oct 2020, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed a 87.8% improvement of vm-scalability.throughput due to
> commit:
>
>
> commit: 7fef431be9c9ac255838a9578331567b9dba4477 ("mm/page_alloc: place pages
> to tail in __free_pages_core()")
> https://git.kernel.org/cgit/
oup. We already track file
> THP and shmem THP per node, so making them per-cgroup is only a matter
> of switching from node to lruvec counters. All callsites are in places
> where the pages are charged and locked, so page->memcg is stable.
>
> Signed-off-by: Johannes Weiner
Acked-by: David Rientjes
Nice!
On Tue, 20 Oct 2020, Huang, Ying wrote:
> >> =
> >> compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
> >>
> >> gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-202006
include/trace/events/
> [2]: check_synth_field() in kernel/trace/trace_events_hist.c
>
> Acked-by: Michel Lespinasse
> Signed-off-by: Axel Rasmussen
Acked-by: David Rientjes
effectively no overhead unless tracepoints are enabled at
> runtime. If tracepoints are enabled, there is a performance impact, but
> how much depends on exactly what e.g. the BPF program does.
>
> Signed-off-by: Axel Rasmussen
Acked-by: David Rientjes
g_state()
> from int to enum node_stat_item.
>
> Signed-off-by: Muchun Song
Acked-by: David Rientjes
On Sun, 4 Oct 2020, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed a -8.7% regression of vm-scalability.throughput due to commit:
>
>
> commit: 85b9f46e8ea451633ccd60a7d8cacbfff9f34047 ("mm, thp: track fallbacks
> due to failed memcg charges separately")
> https://git.kernel.org/cgit
Paolo, ping?
On Tue, 25 Aug 2020, David Rientjes wrote:
> There may be many encrypted regions that need to be unregistered when a
> SEV VM is destroyed. This can lead to soft lockups. For example, on a
> host running 4.15:
>
> watchdog: BUG: soft lockup - CPU#20
n type
>list_add(&page->lru, &pcp->lists[migratetype]);
> // add new page to already drained pcp list
>
> Thread#2
> Never drains pcp again, and therefore gets stuck in the loop.
>
> The fix is to try to drain per-cpu lists again after
> check_pages_isolated_cb() fails.
>
> Signed-off-by: Pavel Tatashin
> Cc: sta...@vger.kernel.org
Acked-by: David Rientjes
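A simplified sketch of where the extra drain fits in the offline_pages() retry loop (error handling omitted; in the real code the isolation check runs through walk_system_ram_range()).

	do {
		pfn = start_pfn;
		if (!scan_movable_pages(pfn, end_pfn, &pfn))
			do_migrate_range(pfn, end_pfn);

		/*
		 * A page freed into a remote CPU's pcp list after the earlier
		 * drain would make the isolation check fail forever, so drain
		 * again before re-checking.
		 */
		drain_all_pages(zone);

		ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
					    NULL, check_pages_isolated_cb);
	} while (ret);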
el/mca_recovery.ko] undefined!
> ERROR: "max_low_pfn" [arch/ia64/kernel/mca_recovery.ko] undefined!
>
> David suggested just exporting min_low_pfn & max_low_pfn in
> mm/memblock.c:
> https://lore.kernel.org/lkml/alpine.deb.2.22.394.2006291911220.1118...@chino.kir.corp.google.
kernel.org/lkml/20200630111519.ga1951...@linux.ibm.com/
>
> David suggested just exporting min_low_pfn & max_low_pfn in
> mm/memblock.c:
> https://lore.kernel.org/lkml/alpine.deb.2.22.394.2006291911220.1118...@chino.kir.corp.google.com/
>
> Reported-by: kernel test robot
>
On Sat, 29 Aug 2020, Christoph Hellwig wrote:
> > Just adding Christoph to the participants list, since at a guess it's
> > due to his changes whether they came from the nvme side or the dma
> > side..
> >
> > Christoph?
>
> This kinda looks like the sqsize regression we had in earlier 5.9-rc,
>
are no other changes that can prevent soft lockups for
very large SEV VMs in the latest kernel.
Periodically schedule if necessary. This still holds kvm->lock across the
resched, but since this only happens when the VM is destroyed this is
assumed to be acceptable.
Signed-off-by: David Rie
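A sketch of the fix being described; the region list walk mirrors the SEV teardown code in arch/x86/kvm/svm, but treat the identifiers as illustrative.

	struct list_head *head = &sev->regions_list;
	struct list_head *pos, *q;

	/* kvm->lock is held by the caller for the whole teardown. */
	list_for_each_safe(pos, q, head) {
		__unregister_enc_region_locked(kvm,
			list_entry(pos, struct enc_region, list));
		/* Avoid soft lockups when a VM registered very many regions. */
		cond_resched();
	}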
= min(pcp->count, count);
> >>while (count) {
> >>struct list_head *list;
> >>
> >>
> >
> > Fixes: and Cc: stable... tags?
>
> Fixes: 5f8dcc21211a ("page-allocator: split per-cpu list into
> one-list-per-migrate-type")
> Cc: [2.6+]
>
Acked-by: David Rientjes
n line of empty zone, while the parser used
> in the test relies on the protection line to mark the end of each zone.
>
> Let's revert it to avoid breaking userspace testing or applications.
>
Acked-by: David Rientjes
No objection since I noted userspace parsing as a potential ris
y(!object || !node_match(page, node))) {
> object = __slab_alloc(s, gfpflags, node, addr, c);
> - stat(s, ALLOC_SLOWPATH);
> } else {
> void *next_object = get_freepointer_safe(s, object);
>
Acked-by: David Rientjes
> --
> 2.28.0.windows.1
Lol :)
On Mon, 10 Aug 2020, wuyun...@huawei.com wrote:
> From: Abel Wu
>
> The commit below is incomplete, as it didn't handle the add_full() part.
> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before
> remove_full()")
>
> Signed-off-by: Abel Wu
> ---
> mm/slub.c | 4 +++-
> 1 fil
On Mon, 10 Aug 2020, Charan Teja Reddy wrote:
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e4896e6..25e7e12 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3106,6 +3106,7 @@ static void free_unref_page_commit(struct page *page,
> unsigned long pfn)
> struct zone *zo