On 10/18/19 4:15 PM, Michal Hocko wrote:
> It's been some time since I've posted these results. The hugetlb issue
> got resolved but I would still like to hear back about these findings
> because they suggest that the current bail out strategy doesn't seem
> to produce very good results. Essentially [...]

It's been some time since I've posted these results. The hugetlb issue
got resolved but I would still like to hear back about these findings
because they suggest that the current bail out strategy doesn't seem
to produce very good results. Essentially it doesn't really help THP
locality (on moderat[...]

On Thu 03-10-19 10:00:08, Vlastimil Babka wrote:
> On 10/3/19 12:32 AM, David Rientjes wrote:
> > On Wed, 2 Oct 2019, Michal Hocko wrote:
> >
> >>>> If
> >>>> hugetlb wants to stress this to the fullest extent possible, it already
> >>>> appropriately uses __GFP_RETRY_MAYFAIL.
> >>>
> >>> Which doesn't work anymore right now, and should again after this patch.

On 10/3/19 12:32 AM, David Rientjes wrote:
> On Wed, 2 Oct 2019, Michal Hocko wrote:
>
>>>> If
>>>> hugetlb wants to stress this to the fullest extent possible, it already
>>>> appropriately uses __GFP_RETRY_MAYFAIL.
>>>
>>> Which doesn't work anymore right now, and should again after this patch.

On Wed, 2 Oct 2019, Michal Hocko wrote:
> > > If
> > > hugetlb wants to stress this to the fullest extent possible, it already
> > > appropriately uses __GFP_RETRY_MAYFAIL.
> >
> > Which doesn't work anymore right now, and should again after this patch.
>
> I didn't get to fully digest the pat[...]

On Tue 01-10-19 23:54:14, Vlastimil Babka wrote:
> On 10/1/19 10:31 PM, David Rientjes wrote:
[...]
> > If
> > hugetlb wants to stress this to the fullest extent possible, it already
> > appropriately uses __GFP_RETRY_MAYFAIL.
>
> Which doesn't work anymore right now, and should again after this patch.

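[Context for the __GFP_RETRY_MAYFAIL point above: hugetlb opts into that flag in its buddy-allocator helper. A minimal sketch, paraphrased from mm/hugetlb.c around v5.3 with bookkeeping and error handling trimmed -- illustrative, not verbatim:]

	static struct page *alloc_buddy_huge_page(struct hstate *h,
			gfp_t gfp_mask, int nid, nodemask_t *nmask)
	{
		int order = huge_page_order(h);

		/*
		 * __GFP_RETRY_MAYFAIL: retry reclaim/compaction hard for
		 * this costly order, but return NULL rather than invoke
		 * the OOM killer when the allowed nodes have nothing left.
		 */
		gfp_mask |= __GFP_COMP | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
		if (nid == NUMA_NO_NODE)
			nid = numa_node_id();
		return __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
	}
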
On 10/1/19 10:31 PM, David Rientjes wrote:
> On Tue, 1 Oct 2019, Vlastimil Babka wrote:
>
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index 4ae967bcf954..2c48146f3ee2 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -2129,18 +2129,20 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
>> [...]

On Tue, 1 Oct 2019, Vlastimil Babka wrote:
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 4ae967bcf954..2c48146f3ee2 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2129,18 +2129,20 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
> nmask = policy_nodemask(gfp, pol); [...]

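[The hunk above is cut off, but the two-phase idea being discussed -- try the preferred node with compaction only, then retry allowing remote nodes and reclaim -- is roughly what later landed upstream in the THP branch of alloc_pages_vma(). A sketch of that eventual shape; an approximation, not Vlastimil's exact patch:]

	nmask = policy_nodemask(gfp, pol);
	if (!nmask || node_isset(hpage_node, *nmask)) {
		mpol_cond_put(pol);
		/*
		 * First try the preferred node only: compaction may run,
		 * but __GFP_NORETRY keeps the allocator from reclaiming
		 * aggressively on that node.
		 */
		page = __alloc_pages_node(hpage_node,
				gfp | __GFP_THISNODE | __GFP_NORETRY, order);
		/*
		 * If that failed and the caller may reclaim, retry without
		 * __GFP_THISNODE so a remote node can provide the huge
		 * page instead of thrashing the local node.
		 */
		if (!page && (gfp & __GFP_DIRECT_RECLAIM))
			page = __alloc_pages_node(hpage_node, gfp, order);
		goto out;
	}
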
On 10/1/19 7:43 AM, Michal Hocko wrote:
> so we do not get more than 12 huge pages, which is really poor. Although
> hugetlb pages tend to be allocated early after boot, they are still
> an explicit admin request and having less than a 5% success rate is really
> bad. If anything the __GFP_RETRY_MAYFAIL [...]

On Tue 01-10-19 07:43:43, Michal Hocko wrote:
[...]
> I also didn't really get to test any NUMA aspect of the change yet. I
> still do hope that David can share something I can play with
> because I do not want to create something completely artificial.
I have split out my kvm machine into two nodes [...]

On Mon 30-09-19 13:28:17, Michal Hocko wrote:
[...]
> Do not get me wrong, but we have quite a long history of fine-tuning
> for THP by adding kludges here and there, and they usually turn out to
> break something else. I really want to get to understand the underlying
> problem and base a solution [...]

On Sat 28-09-19 13:59:26, Linus Torvalds wrote:
> On Fri, Sep 27, 2019 at 12:48 AM Michal Hocko wrote:
> >
> > - page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> > + if (!order)
> > + page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

On Fri, Sep 27, 2019 at 12:48 AM Michal Hocko wrote:
>
> - page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> + if (!order)
> + page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> if (page)
> goto got_pg;
>
> The w[...]

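[For orientation: the hunk being quoted targets the first allocation attempt at the top of __alloc_pages_slowpath() in mm/page_alloc.c, where the kernel retries the freelists once with adjusted alloc_flags before doing any reclaim. A paraphrased sketch of that context with the proposed change applied -- not verbatim kernel code:]

	/* In __alloc_pages_slowpath(): */
	alloc_flags = gfp_to_alloc_flags(gfp_mask);
	/* ... wake kswapd, etc. ... */

	/*
	 * The adjusted alloc_flags might allow immediate success, so try
	 * the freelists again first. The proposed change limits this to
	 * order-0: a costly order (e.g. order-9 THP) should proceed to
	 * compaction rather than fall back to a remote node that merely
	 * has free base pages.
	 */
	if (!order)
		page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
	if (page)
		goto got_pg;
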
On Thu 26-09-19 12:03:37, David Rientjes wrote:
[...]
> Your patch is setting __GFP_THISNODE for __GFP_DIRECT_RECLAIM: this
> allocation will fail in the fastpath for both my case (fragmented local
> node) and Andrea's case (out of memory local node). The first
> get_page_from_freelist() will t[...]

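[Why the fast path fails outright under __GFP_THISNODE: the flag selects the no-fallback zonelist, so get_page_from_freelist() only ever sees the requested node's zones. Paraphrased from include/linux/gfp.h; illustrative, not verbatim:]

	static inline int gfp_zonelist(gfp_t flags)
	{
		/*
		 * __GFP_THISNODE picks the zonelist containing only the
		 * requested node. If that node is fragmented (no free
		 * order-9 pages) or out of memory, the fast path has
		 * nowhere else to look and fails immediately.
		 */
		if (IS_ENABLED(CONFIG_NUMA) && unlikely(flags & __GFP_THISNODE))
			return ZONELIST_NOFALLBACK;
		return ZONELIST_FALLBACK;
	}
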
On Wed, 25 Sep 2019, Michal Hocko wrote:
> I am especially interested about this part. The more I think about this
> the more I am convinced that the underlying problem really is in the
> premature fallback in the fast path.
I appreciate you taking the time to continue to look at this but I'm
c[...]

Let me revive this thread as there was no follow-up.
On Mon 09-09-19 21:30:20, Michal Hocko wrote:
[...]
> I believe it would be best to start by explaining why we do not see
> the same problem with order-0 requests. We do not enter the slow path
> and thus the memory reclaim if there is any o[...]

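[The asymmetry Michal describes comes from the fast path walking the whole zonelist before giving up. An illustrative pseudo-structure of get_page_from_freelist(), heavily simplified -- not verbatim:]

	/* Walk all allowed zones/nodes in preference order. */
	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
					ac->high_zoneidx, ac->nodemask) {
		unsigned long mark = wmark_pages(zone,
					alloc_flags & ALLOC_WMARK_MASK);
		/*
		 * An order-0 request succeeds if *any* allowed node still
		 * has a free page, so it falls back to a remote node and
		 * never enters the slow path. An order-9 THP request can
		 * fail on every node once memory is fragmented, and only
		 * then does reclaim/compaction get a chance to run.
		 */
		if (!zone_watermark_ok(zone, order, mark,
				       ac_classzone_idx(ac), alloc_flags))
			continue;
		page = rmqueue(preferred_zone, zone, order, gfp_mask,
			       alloc_flags, ac->migratetype);
		if (page)
			return page;
	}
	return NULL;	/* only now does __alloc_pages_slowpath() run */
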
On Thu 05-09-19 14:06:28, David Rientjes wrote:
> On Wed, 4 Sep 2019, Andrea Arcangeli wrote:
>
> > > This is an admittedly hacky solution that shouldn't cause anybody to
> > > regress based on NUMA and the semantics of MADV_HUGEPAGE for the past
> > > 4 1/2 years for users whose workload does fit within a socket.

On Sun 08-09-19 13:45:13, David Rientjes wrote:
> If the reverts to 5.3 are not
> applied, then I'm not at all confident that forward progress on this issue
> will be made:
David, could you stop this finally? I think there is a good consensus
that the current (even after reverts) behavior is not [...]

On Sun, 8 Sep 2019, Vlastimil Babka wrote:
> > On Sat, 7 Sep 2019, Linus Torvalds wrote:
> >
> >>> Andrea acknowledges the swap storm that he reported would be fixed with
> >>> the last two patches in this series
> >>
> >> The problem is that even you aren't arguing that those patches should
> >> go into 5.3.

On 9/8/19 3:50 AM, David Rientjes wrote:
> On Sat, 7 Sep 2019, Linus Torvalds wrote:
>
>>> Andrea acknowledges the swap storm that he reported would be fixed with
>>> the last two patches in this series
>>
>> The problem is that even you aren't arguing that those patches should
>> go into 5.3.
>>

On Sat, 7 Sep 2019, Linus Torvalds wrote:
> > Andrea acknowledges the swap storm that he reported would be fixed with
> > the last two patches in this series
>
> The problem is that even you aren't arguing that those patches should
> go into 5.3.
>
For three reasons: (a) we lack a test result f[...]

On Sat, Sep 7, 2019 at 12:51 PM David Rientjes wrote:
>
> Andrea acknowledges the swap storm that he reported would be fixed with
> the last two patches in this series
The problem is that even you aren't arguing that those patches should
go into 5.3.
So those fixes aren't going in, so "the swap [...]

Is there any objection from anybody to applying the first two patches, the
reverts of the reverts that went into 5.3-rc5, for 5.3 and pursuing
discussion and development using the last two patches in this series as a
starting point for a sane allocation policy that just works by default for
eve[...]

On Wed, 4 Sep 2019, Andrea Arcangeli wrote:
> > This is an admittedly hacky solution that shouldn't cause anybody to
> > regress based on NUMA and the semantics of MADV_HUGEPAGE for the past
> > 4 1/2 years for users whose workload does fit within a socket.
>
> How can you live with the below i[...]

On Wed, 4 Sep 2019, Linus Torvalds wrote:
> > This series reverts those reverts and attempts to propose a more sane
> > default allocation strategy specifically for hugepages. Andrea
> > acknowledges this is likely to fix the swap storms that he originally
> > reported that resulted in the patches [...]

On Wed, Sep 04, 2019 at 12:54:15PM -0700, David Rientjes wrote:
> Two commits:
>
> commit a8282608c88e08b1782141026eab61204c1e533f
> Author: Andrea Arcangeli
> Date: Tue Aug 13 15:37:53 2019 -0700
>
> Revert "mm, thp: restore node-local hugepage allocations"
>
> commit 92717d429b38e4f9f934eed7e605cc42858f1839 [...]

On Wed, Sep 4, 2019 at 12:54 PM David Rientjes wrote:
>
> This series reverts those reverts and attempts to propose a more sane
> default allocation strategy specifically for hugepages. Andrea
> acknowledges this is likely to fix the swap storms that he originally
> reported that resulted in the [...]

Two commits:

commit a8282608c88e08b1782141026eab61204c1e533f
Author: Andrea Arcangeli
Date:   Tue Aug 13 15:37:53 2019 -0700

    Revert "mm, thp: restore node-local hugepage allocations"

commit 92717d429b38e4f9f934eed7e605cc42858f1839
Author: Andrea Arcangeli
Date:   Tue Aug 13 15:37:50 2019 -0700 [...]