On Wed, 1 May 2024 02:01:20 -0400 Michael S. Tsirkin wrote:
>
> and then it failed testing.
>
So did my patch [1], but then the reason was spotted [2,3].
[1] https://lore.kernel.org/lkml/20240430110209.4310-1-hdan...@sina.com/
[2] https://lore.kernel.org/lkml/20240430225005.4368-1-hdan...@sina.com/
[3]
On Tue, Apr 30, 2024 at 11:23:04AM -0500, Mike Christie wrote:
> On 4/30/24 8:05 AM, Edward Adam Davis wrote:
> > static int vhost_task_fn(void *data)
> > {
> > 	struct vhost_task *vtsk = data;
> > @@ -51,7 +51,7 @@ static int vhost_task_fn(void *data)
> > 		schedule();
> >
On Sat, 03 Feb 2024 14:16:16 +0800 Ubisectech Sirius wrote:
> Hello.
> We are Ubisectech Sirius Team, the vulnerability lab of China ValiantSec.
> Recently, our team has discovered a issue in Linux kernel
> 6.8.0-rc2-g6764c317b6bb.
> Attached to the email were a POC file of the issue.
Could you test if
On 23 Dec 2022 15:51:52 +0900 Daisuke Matsuda wrote:
> @@ -137,15 +153,27 @@ void rxe_sched_task(struct rxe_task *task)
> 	if (task->destroyed)
> 		return;
>
> -	tasklet_schedule(&task->tasklet);
> +	/*
> +	 * busy-loop while qp reset is in progress.
> +	 * This may be
On Tue, 16 Apr 2019 20:38:34 +0200 Christian König wrote:
> + /**
> + * @unpin_dma_buf:
> + *
> + * This is called by dma_buf_unpin and lets the exporter know that an
> + * importer doesn't need to the DMA-buf to stay were it is any more.
> + *
s/need to/need/ s/were/where/
On Tue, 16 Apr 2019 20:38:35 +0200 Christian König wrote:
> @@ -331,14 +282,19 @@ EXPORT_SYMBOL(drm_gem_map_dma_buf);
> * @sgt: scatterlist info of the buffer to unmap
> * @dir: direction of DMA transfer
> *
> - * Not implemented. The unmap is done at drm_gem_map_detach(). This can be
> - *
On Tue, 16 Apr 2019 20:38:32 +0200 Christian König wrote:
> @@ -688,9 +689,9 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
> 	if (attach->sgt)
> 		return attach->sgt;
>
> -	sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
> -
On Tue, 16 Apr 2019 20:38:33 +0200 Christian König wrote:
> Each importer can now provide an invalidate_mappings callback.
>
> This allows the exporter to provide the mappings without the need to pin
> the backing store.
>
> v2: don't try to invalidate mappings when the callback is NULL,
> l
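A minimal sketch of the idea with assumed names (only the callback name
comes from the quoted changelog; importer_priv and the helper are
placeholders):

	static void my_invalidate_mappings(struct dma_buf_attachment *attach)
	{
		struct my_importer *imp = attach->importer_priv;

		/* drop cached sg tables; the next map call under the
		 * reservation lock re-creates them, so the exporter
		 * never has to keep the backing store pinned */
		my_importer_drop_mappings(imp);
	}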
On Tue, 16 Apr 2019 20:38:31 +0200 Christian König wrote:
> Add function variants which can be called with the reservation lock
> already held.
>
> v2: reordered, add lockdep asserts, fix kerneldoc
> v3: rebased on sgt caching
>
> Signed-off-by: Christian König
> ---
> drivers/dma-buf/dma-buf.
On April 11, 2017 10:06 PM Vlastimil Babka wrote:
>
> static void cpuset_change_task_nodemask(struct task_struct *tsk,
> 					nodemask_t *newmems)
> {
> -	bool need_loop;
> -
> 	task_lock(tsk);
> -	/*
> -	 * Determine if a loop is necessary if a
x150
> syscall_return_slowpath+0x184/0x1c0
> entry_SYSCALL_64_fastpath+0xab/0xad
>
> Reported-by: Vegard Nossum
> Signed-off-by: Mike Kravetz
> ---
Acked-by: Hillf Danton
On April 10, 2017 5:54 PM Xishi Qiu wrote:
> On 2017/4/10 17:37, Hillf Danton wrote:
>
> > On April 10, 2017 4:57 PM Xishi Qiu wrote:
> >> On 2017/4/10 14:42, Hillf Danton wrote:
> >>
> >>> On April 08, 2017 9:40 PM zhong Jiang wrote:
> >>>&
On April 10, 2017 4:57 PM Xishi Qiu wrote:
> On 2017/4/10 14:42, Hillf Danton wrote:
>
> > On April 08, 2017 9:40 PM zhong Jiang wrote:
> >>
> >> When running the stable docker cases in the VM, the following issue will
> >> come up.
> >>
On April 08, 2017 9:40 PM zhong Jiang wrote:
>
> When running the stable docker cases in the VM, the following issue will
> come up.
>
> #40 [8801b57ffb30] async_page_fault at 8165c9f8
> [exception RIP: down_read_trylock+5]
> RIP: 810aca65 RSP: 8801b57ffbe8 R
, Mike:)
Acked-by: Hillf Danton
ted code, but the change makes it
> more
> robust.
>
> Suggested-by: Michal Hocko
> Signed-off-by: Vlastimil Babka
> ---
Acked-by: Hillf Danton
true. There is no such known context, but let's
> play it safe and make __alloc_pages_direct_compact() robust for cases where
> PF_MEMALLOC is already set.
>
> Fixes: a8161d1ed609 ("mm, page_alloc: restructure direct compaction handling
> in slowpath")
> Reported-
On March 31, 2017 2:49 PM Michal Hocko wrote:
> On Fri 31-03-17 11:49:49, Hillf Danton wrote:
> [...]
> > > -/* Can fail with -ENOMEM from allocating a wait table with vmalloc() or
> > > - * alloc_bootmem_node_nopanic()/memblock_virt_alloc_node_nopanic() */
On March 30, 2017 7:55 PM Michal Hocko wrote:
>
> +static void __meminit resize_zone_range(struct zone *zone, unsigned long start_pfn,
> +		unsigned long nr_pages)
> +{
> +	unsigned long old_end_pfn = zone_end_pfn(zone);
> +
> +	if (start_pfn < zone->zone_start_pfn)
> +
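The hunk is cut off above; for orientation, the helper continues roughly
as in the version that eventually landed (a reconstruction from memory,
not the quoted v1):

	static void __meminit resize_zone_range(struct zone *zone,
			unsigned long start_pfn, unsigned long nr_pages)
	{
		unsigned long old_end_pfn = zone_end_pfn(zone);

		/* grow the zone span downwards and/or upwards */
		if (start_pfn < zone->zone_start_pfn)
			zone->zone_start_pfn = start_pfn;
		zone->zone_end_pfn = max(start_pfn + nr_pages, old_end_pfn);
	}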
On March 30, 2017 7:55 PM Michal Hocko wrote:
>
> From: Michal Hocko
>
> init_currently_empty_zone doesn't have any error to return yet it is
> still an int and callers try to be defensive and try to handle potential
> error. Remove this nonsense and simplify all callers.
>
It is already cut o
On March 30, 2017 7:55 PM Michal Hocko wrote:
>
> @@ -5535,9 +5535,6 @@ int __meminit init_currently_empty_zone(struct zone *zone,
> 		zone_start_pfn, (zone_start_pfn + size));
>
> 	zone_init_free_lists(zone);
> -	zone->initialized = 1;
> -
> -	return 0;
> }
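The caller-side simplification this buys looks roughly like the
following (illustrative, not a hunk from the patch):

	-	ret = init_currently_empty_zone(zone, zone_start_pfn, size);
	-	if (ret)
	-		return ret;
	+	init_currently_empty_zone(zone, zone_start_pfn, size);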
On March 28, 2017 1:06 AM Vito Caputo wrote:
>
> The existing path and memory cleanups appear to be in reverse order, and
> there's no iput(), potentially leaking the inode, in the last two error gotos.
>
> Also make put_memory shmem_unacct_size() conditional on !inode since if we
> entered cleanu
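The unwind shape being argued for, as an illustrative sketch (the
example_* helpers are placeholders): labels run in reverse order of what
has succeeded so far, and iput() only runs once the inode exists:

	inode = example_get_inode(sb);		/* placeholder helper */
	if (!inode)
		goto put_memory;
	dentry = example_alloc_dentry(inode);	/* placeholder helper */
	if (!dentry)
		goto put_inode;
	return 0;

put_inode:
	iput(inode);
put_memory:
	shmem_unacct_size(flags, size);
	return -ENOMEM;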
f/0xc2
>
> Analysis provided by Tetsuo Handa
> v2: Remove now redundant initialization in hugetlbfs_get_root
>
> Reported-by: Dmitry Vyukov
> Signed-off-by: Mike Kravetz
> ---
Acked-by: Hillf Danton
tlb_file_setup+0x593/0x9f0 fs/hugetlbfs/inode.c:1306
> > newseg+0x422/0xd30 ipc/shm.c:575
> > ipcget_new ipc/util.c:285 [inline]
> > ipcget+0x21e/0x580 ipc/util.c:639
> > SYSC_shmget ipc/shm.c:673 [inline]
> > SyS_shmget+0x158/0x230 ipc/shm.c:657
> > en
; SyS_shmget+0x158/0x230 ipc/shm.c:657
> entry_SYSCALL_64_fastpath+0x1f/0xc2
> RIP: resv_map_release+0x265/0x330 mm/hugetlb.c:742
>
> Reported-by: Dmitry Vyukov
> Signed-off-by: Mike Kravetz
> ---
Acked-by: Hillf Danton
> mm/hugetlb.c | 4 +++-
> 1 file changed, 3
on-present for hugetlb is
> not correct, because pmd_present() checks multiple bits (not only
> _PAGE_PRESENT) for historical reasons and it can misjudge hugetlb state.
>
> Fixes: e66f17ff7177 ("mm/hugetlb: take page table lock in follow_huge_pmd()")
> Signed-off-by: Naoya
On March 21, 2017 5:10 PM Dmitry Vyukov wrote:
>
> @@ -60,15 +60,8 @@ void notrace __sanitizer_cov_trace_pc(void)
> 	/*
> 	 * We are interested in code coverage as a function of a syscall inputs,
> 	 * so we ignore code executed in interrupts.
> -	 * The checks for whether we
On March 15, 2017 5:00 PM Aaron Lu wrote:
> void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
> {
> +	struct batch_free_struct *batch_free, *n;
> +
s/*n/*next/
> 	tlb_flush_mmu(tlb);
>
> 	/* keep the page table cache within bounds */
>
echanism to kswapd. So, add kswapd_failures check
> on the throttle_direct_reclaim condition.
>
> Signed-off-by: Shakeel Butt
> Suggested-by: Michal Hocko
> Suggested-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
can activate it.
> There is no point to introduce new return value SWAP_DIRTY
> in ttu at the moment.
>
> Acked-by: Kirill A. Shutemov
> Signed-off-by: Minchan Kim
> ---
Acked-by: Hillf Danton
> include/linux/rmap.h | 1 -
> mm/rmap.c | 6 +++---
> mm/vmsca
On March 13, 2017 8:36 AM Minchan Kim wrote:
>
> Nobody uses the ret variable. Remove it.
>
> Acked-by: Kirill A. Shutemov
> Signed-off-by: Minchan Kim
> ---
Acked-by: Hillf Danton
> mm/rmap.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
On March 07, 2017 12:24 AM Johannes Weiner wrote:
> On Mon, Mar 06, 2017 at 10:37:40AM +0900, Minchan Kim wrote:
> > On Fri, Mar 03, 2017 at 08:59:54AM +0100, Michal Hocko wrote:
> > > On Fri 03-03-17 10:26:09, Minchan Kim wrote:
> > > > On Tue, Feb 28, 2017 at 04:39:59PM -0500, Johannes Weiner w
On March 03, 2017 5:45 AM Laura Abbott wrote:
>
> +static struct sg_table *dup_sg_table(struct sg_table *table)
> +{
> +	struct sg_table *new_table;
> +	int ret, i;
> +	struct scatterlist *sg, *new_sg;
> +
> +	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> +	if (!new_
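The hunk is truncated; from context it presumably continues along the
familiar lines below (a hedged reconstruction, not necessarily the
posted code):

	if (!new_table)
		return ERR_PTR(-ENOMEM);

	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
	if (ret) {
		kfree(new_table);
		return ERR_PTR(-ENOMEM);
	}

	new_sg = new_table->sgl;
	for_each_sg(table->sgl, sg, table->nents, i) {
		memcpy(new_sg, sg, sizeof(*sg));
		new_sg = sg_next(new_sg);
	}

	return new_table;
}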
On March 02, 2017 11:11 PM Kirill A. Shutemov wrote:
>
> Basically the same race as with numa balancing in change_huge_pmd(), but
> a bit simpler to mitigate: we don't need to preserve dirty/young flags
> here due to MADV_FREE functionality.
>
> Signed-off-by: Kirill A. Shutemov
> Cc: Minchan
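The general pattern behind such fixes, sketched for illustration (not
the actual hunk): a plain read-modify-write of the pmd can silently
overwrite a concurrent zap, so the entry is fetched and cleared
atomically before the modified value is installed:

	/* racy: a zap between the read and the write-back is lost */
	orig_pmd = *pmd;
	orig_pmd = pmd_mkclean(pmd_mkold(orig_pmd));
	set_pmd_at(mm, addr, pmd, orig_pmd);

	/* atomic: fetch-and-clear first, then install */
	orig_pmd = pmdp_huge_get_and_clear(mm, addr, pmd);
	orig_pmd = pmd_mkclean(pmd_mkold(orig_pmd));
	set_pmd_at(mm, addr, pmd, orig_pmd);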
On March 02, 2017 2:39 PM Minchan Kim wrote:
> @@ -1424,7 +1424,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> 	} else if (!PageSwapBacked(page)) {
> 		/* dirty MADV_FREE page */
Nit: enrich the comment please
g tricks for pages skipped due to zone constraints.
>
> Signed-off-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
ess reclaiming a few pages, the backoff function gets reset also,
> and so is of little help in these scenarios.
>
> We might want a backoff function for when there IS progress, but not
> enough to be satisfactory. But this isn't that. Remove it.
>
> Signed-off-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
> Signed-off-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
any meaningful way.
>
> Remove the counter and the unused pgdat_reclaimable().
>
> Signed-off-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
n.c | 19 +------
> 1 file changed, 5 insertions(+), 14 deletions(-)
>
Acked-by: Hillf Danton
_scan stuff, as well as the ugly multi-pass target
> calculation that it necessitated.
>
> Signed-off-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
s to be a spurious change in this patch as I doubt the
> series was tested with laptop_mode, and neither is that particular
> change mentioned in the changelog. Remove it, it's still recent.
>
> Signed-off-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
cking the same pgdat over and over again doesn't make sense.
>
> Fixes: 599d0c954f91 ("mm, vmscan: move LRU lists to node")
> Signed-off-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
to mm/internal.h (Michal)
>
> Reported-by: Jia He
> Signed-off-by: Johannes Weiner
> Tested-by: Jia He
> Acked-by: Michal Hocko
> ---
Acked-by: Hillf Danton
t; Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
Gorman
> Cc: Andrew Morton
> Acked-by: Johannes Weiner
> Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
Michal Hocko
> Cc: Minchan Kim
> Cc: Hugh Dickins
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Andrew Morton
> Suggested-by: Johannes Weiner
> Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
't want to
> reclaim too many MADV_FREE pages before used once pages.
>
> Based on Minchan's original patch
>
> Cc: Michal Hocko
> Cc: Minchan Kim
> Cc: Hugh Dickins
> Cc: Johannes Weiner
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Andrew Morton
> Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
umption doesn't hold any more, so fix them.
>
> Cc: Michal Hocko
> Cc: Minchan Kim
> Cc: Hugh Dickins
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Andrew Morton
> Acked-by: Johannes Weiner
> Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
TTU_UNMAP is unnecessary. If no other flags set (for
> example, TTU_MIGRATION), an unmap is implied.
>
> Cc: Michal Hocko
> Cc: Minchan Kim
> Cc: Hugh Dickins
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Andrew Morton
> Suggested-by: Johannes Weiner
> Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
On February 21, 2017 12:34 AM Vlastimil Babka wrote:
> On 02/16/2017 09:21 AM, Hillf Danton wrote:
> > Right, but the order-3 request can also come up while kswapd is active and
> > gives up order-5.
>
> "Giving up on order-5" means it will set sc.order to 0,
argument passed.
>
> Signed-off-by: Aneesh Kumar K.V
Fixes: bae473a423 ("mm: introduce fault_env")
Acked-by: Hillf Danton
> ---
> mm/huge_memory.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 5f3ad65c8
On February 16, 2017 4:11 PM Mel Gorman wrote:
> On Thu, Feb 16, 2017 at 02:23:08PM +0800, Hillf Danton wrote:
> > On February 15, 2017 5:23 PM Mel Gorman wrote:
> > > */
> > > static int kswapd(void *p)
> > > {
> > > - unsigned int alloc_order, r
On February 15, 2017 5:23 PM Mel Gorman wrote:
> */
> static int kswapd(void *p)
> {
> - unsigned int alloc_order, reclaim_order, classzone_idx;
> + unsigned int alloc_order, reclaim_order;
> + unsigned int classzone_idx = MAX_NR_ZONES - 1;
> pg_data_t *pgdat = (pg_data_t*)p;
> This patch is included with the data in case a bisection leads to this area.
> This patch is also a pre-requisite for the rest of the series.
>
> Signed-off-by: Shantanu Goel
> Signed-off-by: Mel Gorman
> ---
Acked-by: Hillf Danton
> mm/vmscan.c | 6 +++---
> 1 file chan
On February 04, 2017 2:38 PM Hillf Danton wrote:
>
> On February 04, 2017 7:33 AM Shaohua Li wrote:
> > @@ -1404,6 +1401,8 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb,
> > struct vm_area_struct *vma,
> > set_pmd_at
equests.patch
>
> Signed-off-by: Mel Gorman
> ---
Thanks for fixing it.
Acked-by: Hillf Danton
> mm/page_alloc.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index eaecb4b145e6..2a36dad03dac 100644
> -
On February 06, 2017 12:13 AM Zi Yan wrote:
>
> @@ -1233,33 +1233,31 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
> 				struct zap_details *details)
> {
> 	pmd_t *pmd;
> +	spinlock_t *ptl;
> 	unsigned long next;
>
> pmd =
On February 04, 2017 7:33 AM Shaohua Li wrote:
> @@ -1404,6 +1401,8 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb,
> struct vm_area_struct *vma,
> set_pmd_at(mm, addr, pmd, orig_pmd);
> tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> }
> +
> + mark_page_la
On February 04, 2017 4:32 AM Mel Gorman wrote:
>
> Hillf Danton pointed out that since commit 1d82de618dd ("mm, vmscan:
> make kswapd reclaim in terms of nodes") that PGDAT_WRITEBACK is no longer
> cleared. It was not noticed as triggering it requires pages under writ
On February 03, 2017 3:20 AM Johannes Weiner wrote:
> @@ -1063,7 +1063,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> 	    PageReclaim(page) &&
> 	    test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
>
[inline]
> khugepaged+0xe9b/0x1590 mm/khugepaged.c:1853
> kthread+0x326/0x3f0 kernel/kthread.c:227
> ret_from_fork+0x31/0x40 arch/x86/entry/entry_64.S:430
>
> The iput() from atomic context was a bad idea: if after igrab() somebody
> else calls iput() and we left with the last inode reference, our iput()
> would lead to inode eviction and therefore sleeping.
>
> This patch should fix the situation.
>
> Signed-off-by: Kirill A. Shutemov
> Reported-by: Dmitry Vyukov
> ---
Acked-by: Hillf Danton
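The safe pattern, for illustration (a sketch, not the patch itself;
"lock" stands in for whatever spinlock is held): keep the extra
reference across the atomic section and drop it only after unlocking,
since iput() on the last reference can evict the inode and sleep:

	inode = igrab(mapping->host);
	spin_lock(&lock);
	/* ... atomic work on the inode ... */
	spin_unlock(&lock);
	iput(inode);	/* may sleep: only safe here */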
On February 01, 2017 3:54 AM Zi Yan wrote:
>
> I am also doing some tests on THP migration and discover that there are
> some corner cases not handled in this patchset.
>
> For example, in handle_mm_fault, without taking pmd_lock, the kernel may
> see pmd_none(*pmd) during THP migrations, which
On January 26, 2017 2:26 AM Kirill A. Shutemov wrote:
>
> For consistency, it worth converting all page_check_address() to
> page_vma_mapped_walk(), so we could drop the former.
>
> Signed-off-by: Kirill A. Shutemov
> ---
Acked-by: Hillf Danton
> mm/p
A fix is added for start in the unlikely case it goes outside the range, and
its currently relevant debugging is cut off.
Other than that,
Acked-by: Hillf Danton
| 23 ++
> mm/userfaultfd.c | 42 ++-
> mm/util.c | 5 ++-
> 15 files changed, 215 insertions(+), 68 deletions(-)
>
> --
Acked-by: Hillf Danton
ude/trace/events/writeback.h | 2 +-
> mm/swap.c | 9 ++---
> mm/vmscan.c | 68 +++---
> 6 files changed, 41 insertions(+), 49 deletions(-)
>
Acked-by: Hillf Danton
eing complete when the function returns.
>
> Signed-off-by: Mel Gorman
> ---
Acked-by: Hillf Danton
On Wednesday, January 25, 2017 4:00 PM Michal Hocko wrote:
> On Wed 25-01-17 15:00:51, Hillf Danton wrote:
> > On Tuesday, January 24, 2017 8:41 PM Michal Hocko wrote:
> > > On Fri 20-01-17 16:33:36, Hillf Danton wrote:
> > > >
> > > > On Tuesday, De
On Tuesday, January 24, 2017 8:41 PM Michal Hocko wrote:
> On Fri 20-01-17 16:33:36, Hillf Danton wrote:
> >
> > On Tuesday, December 20, 2016 9:49 PM Michal Hocko wrote:
> > >
> > > @@ -1013,7 +1013,7 @@ bool out_of_memory(struct oom_control *oc)
> > >
On Friday, January 20, 2017 6:39 PM Vlastimil Babka wrote:
>
> Changes since v1:
> - add/remove comments per Michal Hocko and Hillf Danton
> - move no_zone: label in patch 3 so we don't miss part of ac initialization
>
> This is v2 of my attempt to fix the recent rep
On Tuesday, December 20, 2016 9:49 PM Michal Hocko wrote:
>
> @@ -1013,7 +1013,7 @@ bool out_of_memory(struct oom_control *oc)
>* make sure exclude 0 mask - all other users should have at least
>* ___GFP_DIRECT_RECLAIM to get here.
>*/
> - if (oc->gfp_mask && !(oc->gf
This provides better debugging output since
> cpuset_print_current_mems_allowed() is already provided.
>
> Suggested-by: Vlastimil Babka
> Signed-off-by: David Rientjes
> ---
Acked-by: Hillf Danton
> mm/oom_kill.c | 16 +---
> 1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff
On Thursday, January 19, 2017 6:08 PM Mel Gorman wrote:
>
> If it's definitely required and is proven to fix the
> infinite-loop-without-oom workload then I'll back off and withdraw my
> objections. However, I'd at least like the following untested patch to
> be considered as an alternative. It
On Wednesday, January 18, 2017 6:16 AM Vlastimil Babka wrote:
>
> This is a preparation for the following patch to make review simpler. While
> the primary motivation is a bug fix, this could also save some cycles in the
> fast path.
>
This also gets kswapd involved.
Dunno how frequent cpuset i
On Wednesday, January 18, 2017 6:16 AM Vlastimil Babka wrote:
>
> @@ -3802,13 +3811,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int
> order,
>* Also recalculate the starting point for the zonelist iterator or
>* we could end up iterating over non-eligible zones endlessl
as well - Vlastimil
>
> Acked-by: Mel Gorman
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
tside of the allocating task numa
> policy. Add this check to not pollute the output with the pointless
> information.
>
> Acked-by: Mel Gorman
> Acked-by: Johannes Weiner
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
> mm/page_alloc.c | 3 +++
>
n: consider
> eligible zones in get_scan_count").
>
Looks radical ;)
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
> mm/vmscan.c | 27 ---
> 1 file changed, 27 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
>
m zone the worse the problem will
> be.
>
> Fix this by filtering out all the ineligible zones when calculating the
> lru size for both paths and consider only sc->reclaim_idx zones.
>
> Acked-by: Minchan Kim
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
r patches will add more of
> them. Add a new parameter to lruvec_lru_size and allow it filter out
> zones which are not eligible for the given context.
>
> Acked-by: Johannes Weiner
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
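The filtering idea, sketched (close in spirit to what was merged, but
written from memory with simplified accounting; eligible_lru_size is a
placeholder name):

	static unsigned long eligible_lru_size(struct pglist_data *pgdat,
					       enum lru_list lru, int zone_idx)
	{
		unsigned long size = 0;
		int zid;

		/* count only zones usable for this reclaim context */
		for (zid = 0; zid <= zone_idx; zid++) {
			struct zone *zone = &pgdat->node_zones[zid];

			if (!managed_zone(zone))
				continue;
			size += zone_page_state(zone, NR_ZONE_LRU_BASE + lru);
		}
		return size;
	}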
-0 pages has to do the preparation steps multiple times. This patch
> structures __alloc_pages_nodemask such that it's relatively easy to build
> a bulk order-0 page allocator. There is no functional change.
>
> Signed-off-by: Mel Gorman
> ---
Acked-by: Hillf Danton
mes. This patch structures buffere_rmqueue such that it's relatively
> easy to build a bulk order-0 page allocator. There is no functional
> change.
>
> Signed-off-by: Mel Gorman
> ---
Acked-by: Hillf Danton
.71%)
> Amean total-odr0-16384 268.00 ( 0.00%) 186.46 ( 30.42%)
>
> It shows a roughly 50-60% reduction in the cost of allocating pages.
> The free paths are not improved as much but relatively little can be batched
> there. It's not quite as
On Monday, January 09, 2017 5:48 PM Mel Gorman wrote:
> On Mon, Jan 09, 2017 at 11:14:29AM +0800, Hillf Danton wrote:
> > > On Friday, January 06, 2017 6:16 PM Mel Gorman wrote:
> > >
> > > On Fri, Jan 06, 2017 at 11:26:46AM +0800, Hillf Danton wrote:
> > > &
> Sent: Friday, January 06, 2017 6:16 PM Mel Gorman wrote:
>
> On Fri, Jan 06, 2017 at 11:26:46AM +0800, Hillf Danton wrote:
> >
> > On Wednesday, January 04, 2017 7:11 PM Mel Gorman wrote:
> > > @@ -2647,9 +2644,8 @@ static struct page *rmqueue_pcplist(stru
On Wednesday, January 04, 2017 7:11 PM Mel Gorman wrote:
> @@ -2647,9 +2644,8 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
> 	struct list_head *list;
> 	bool cold = ((gfp_flags & __GFP_COLD) != 0);
> 	struct page *page;
> -	unsigned long flags;
>
> -
On Friday, December 30, 2016 5:27 PM Michal Hocko wrote:
> Anyway, what do you think about this updated patch? I have kept Hillf's
> A-b so please let me know if it is no longer valid.
>
My mind is not changed :)
Happy new year folks!
Hillf
ard to diagnose
> active/inactive lists balancing. Add mm_vmscan_inactive_list_is_low
> tracepoint to tell us this information.
>
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
> All these are rather low level so they might change in future but the
> tracepoint is already implementation specific so no tools should be
> depending on its stability.
>
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
functional change.
>
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
onymous as well. Change
> the tracepoint to show symbolic names of the lru rather.
>
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
>
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
nk_active which reports
> the number of scanned, rotated, deactivated and freed pages from the
> particular node's active list.
>
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
On Wednesday, December 28, 2016 11:30 PM Michal Hocko wrote:
> From: Michal Hocko
>
> the trace point is not used since 925b7673cce3 ("mm: make per-memcg LRU
> lists exclusive") so it can be removed.
>
> Signed-off-by: Michal Hocko
> ---
Acked-by: Hillf Danton
goto nopage;
> - }
>
> /* Avoid allocations with no watermarks from looping endlessly */
> - if (test_thread_flag(TIF_MEMDIE) && !(gfp_mask & __GFP_NOFAIL))
> + if (test_thread_flag(TIF_MEMDIE))
> goto nopage;
>
Nit: curre
pre-LRU pages in
> the specified fadvise range.
>
> Signed-off-by: Johannes Weiner
> Acked-by: Vlastimil Babka
> Acked-by: Mel Gorman
> ---
Acked-by: Hillf Danton
> mm/fadvise.c | 15 ++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git
bad page warning followed by a soft lockup
> with interrupts disabled in free_pcppages_bulk().
>
> This patch keeps the accounting in sync.
>
> Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages
> allocated from the PCP")
> Signed-off-by
On Friday, December 02, 2016 2:19 PM Vlastimil Babka wrote:
> On 12/02/2016 04:47 AM, Hillf Danton wrote:
> > On Friday, December 02, 2016 8:23 AM Mel Gorman wrote:
> >> Vlastimil Babka pointed out that commit 479f854a207c ("mm, page_alloc:
> >> defer debugging ch
On Friday, December 02, 2016 8:23 AM Mel Gorman wrote:
> Vlastimil Babka pointed out that commit 479f854a207c ("mm, page_alloc:
> defer debugging checks of pages allocated from the PCP") will allow the
> per-cpu list counter to be out of sync with the per-cpu list contents
> if a struct page is cor