On Thu, Mar 28, 2019 at 02:59:30PM -0700, Yang Shi wrote:
> Yes, it still could fail. I can't tell which way is better for now. Off
> the top of my head, I just thought that scanning another round and then
> migrating should still be faster than swapping.

I think it depends on the relative capacities between [...]

On 3/27/19 6:08 AM, Keith Busch wrote:
> On Tue, Mar 26, 2019 at 08:41:15PM -0700, Yang Shi wrote:
>> On 3/26/19 5:35 PM, Keith Busch wrote:
>>> migration nodes have higher free capacity than source nodes. And since
>>> you're attempting THPs without ever splitting them, that also requires
>>> lower fragmentation for a successful migration. [...]

On 27 Mar 2019, at 11:00, Dave Hansen wrote:
> On 3/27/19 10:48 AM, Zi Yan wrote:
>> For 40MB/s vs 750MB/s, they were using sys_migrate_pages(). Sorry
>> about the confusion there. As I measure only migrate_pages() in the
>> kernel, the throughput becomes: migrating a 4KB page: 0.312GB/s vs
>> migrating 512 4KB pages: 0.854GB/s. They are still >2x [...]

On 3/27/19 1:37 PM, Zi Yan wrote:
> Actually, the migration throughput difference does not come from any
> kernel changes; it is a pure comparison between migrate_pages(single
> 4KB page) and migrate_pages(a list of 4KB pages). The point I wanted
> to make is that Yang's approach, which migrates [...]

On 3/27/19 10:48 AM, Zi Yan wrote:
> For 40MB/s vs 750MB/s, they were using sys_migrate_pages(). Sorry
> about the confusion there. As I measure only migrate_pages() in the
> kernel, the throughput becomes: migrating a 4KB page: 0.312GB/s vs
> migrating 512 4KB pages: 0.854GB/s. They are still >2x [...]

On 27 Mar 2019, at 10:05, Dave Hansen wrote:
> On 3/27/19 10:00 AM, Zi Yan wrote:
>> I ask this because I observe that migrating a list of pages can
>> achieve higher throughput than migrating pages individually. For
>> example, migrating 512 4KB pages can achieve ~750MB/s throughput,
>> whereas migrating one 4KB page might only achieve ~40MB/s. [...]

On 3/27/19 10:00 AM, Zi Yan wrote:
> I ask this because I observe that migrating a list of pages can
> achieve higher throughput than migrating pages individually. For
> example, migrating 512 4KB pages can achieve ~750MB/s throughput,
> whereas migrating one 4KB page might only achieve ~40MB/s. [...]

On 27 Mar 2019, at 6:08, Keith Busch wrote:
> On Tue, Mar 26, 2019 at 08:41:15PM -0700, Yang Shi wrote:
>> On 3/26/19 5:35 PM, Keith Busch wrote:
>>> migration nodes have higher free capacity than source nodes. And since
>>> you're attempting THPs without ever splitting them, that also requires
>>> lower fragmentation for a successful migration. [...]

On Tue, Mar 26, 2019 at 08:41:15PM -0700, Yang Shi wrote:
> On 3/26/19 5:35 PM, Keith Busch wrote:
> > migration nodes have higher free capacity than source nodes. And since
> > you're attempting THPs without ever splitting them, that also requires
> > lower fragmentation for a successful migration.

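One way to relax that fragmentation requirement is to split huge pages
whose migration failed and retry them as base pages. A rough kernel-style
sketch, assuming an isolated list of pages awaiting demotion;
split_failed_thps() is a made-up helper, not something from Yang's patch:

/*
 * Made-up helper, not from the patch: after a THP demotion attempt
 * fails for lack of contiguous memory on the target node, split the
 * remaining huge pages so a retry only needs order-0 allocations.
 */
static void split_failed_thps(struct list_head *demote_pages)
{
	struct page *page, *next;

	list_for_each_entry_safe(page, next, demote_pages, lru) {
		if (!PageTransHuge(page) || !trylock_page(page))
			continue;
		/*
		 * split_huge_page_to_list() needs the page lock; passing
		 * the list keeps the new tail pages queued for demotion
		 * instead of dropping them back on the LRU.
		 */
		split_huge_page_to_list(page, demote_pages);
		unlock_page(page);
	}
}
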
On 3/26/19 5:35 PM, Keith Busch wrote:
> On Mon, Mar 25, 2019 at 12:49:21PM -0700, Yang Shi wrote:
>> On 3/24/19 3:20 PM, Keith Busch wrote:
>>> How do these pages eventually get to swap when migration fails? Looks
>>> like that's skipped.
>> Yes, they will just be put back on the LRU. Actually, I don't expect [...]

On Mon, Mar 25, 2019 at 12:49:21PM -0700, Yang Shi wrote:
> On 3/24/19 3:20 PM, Keith Busch wrote:
> > How do these pages eventually get to swap when migration fails? Looks
> > like that's skipped.
>
> Yes, they will just be put back on the LRU. Actually, I don't expect it
> would happen very often [...]

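Caller-side, the pattern Yang describes, putting back on the LRU whatever
migrate_pages() could not move, looks roughly like the sketch below. It is
an illustration under assumptions, not the actual patch: alloc_demote_page()
is a made-up callback, and MR_NUMA_MISPLACED is only a stand-in reason code.

/*
 * Rough caller-side sketch, not the actual patch.  Failed pages stay
 * on the list and are returned to the LRU instead of being swapped
 * directly.
 */
static struct page *alloc_demote_page(struct page *page, unsigned long nid)
{
	/* made-up callback: allocate on the chosen PMEM node */
	return alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);
}

static void demote_page_list(struct list_head *demote_pages, int pmem_nid)
{
	if (list_empty(demote_pages))
		return;

	/*
	 * migrate_pages() leaves anything it could not migrate on the
	 * list; putback_movable_pages() returns those to the LRU, so a
	 * later reclaim pass can still swap them out.
	 */
	if (migrate_pages(demote_pages, alloc_demote_page, NULL,
			  (unsigned long)pmem_nid, MIGRATE_ASYNC,
			  MR_NUMA_MISPLACED))
		putback_movable_pages(demote_pages);
}

Because failed pages rejoin the LRU, skipping the direct-to-swap path on
migration failure does not strand them.
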
On 3/22/19 11:03 PM, Zi Yan wrote:
> On 22 Mar 2019, at 21:44, Yang Shi wrote:
>> Since PMEM provides larger capacity than DRAM and has much lower
>> access latency than disk, it is a good choice to use as a middle
>> tier between DRAM and disk in the page reclaim path.
>> With PMEM nodes, the demotion path of anonymous pages could be:
>> DRAM -> PMEM -> swap device [...]

On 3/24/19 3:20 PM, Keith Busch wrote:
> On Sat, Mar 23, 2019 at 12:44:31PM +0800, Yang Shi wrote:
>> /*
>> + * Demote DRAM pages regardless of the mempolicy.
>> + * Demote anonymous pages only for now and skip MADV_FREE
>> + * pages.
>> + */ [...]

On Sat, Mar 23, 2019 at 12:44:31PM +0800, Yang Shi wrote:
> /*
> + * Demote DRAM pages regardless of the mempolicy.
> + * Demote anonymous pages only for now and skip MADV_FREE
> + * pages.
> + */
> + if (PageAnon(page) && !P[...]

On 22 Mar 2019, at 21:44, Yang Shi wrote:
> Since PMEM provides larger capacity than DRAM and has much lower
> access latency than disk, it is a good choice to use as a middle
> tier between DRAM and disk in the page reclaim path.
>
> With PMEM nodes, the demotion path of anonymous pages could be:
> DRAM -> PMEM -> swap device [...]

Since PMEM provides larger capacity than DRAM and has much lower
access latency than disk, it is a good choice to use as a middle
tier between DRAM and disk in the page reclaim path.

With PMEM nodes, the demotion path of anonymous pages could be:

DRAM -> PMEM -> swap device

This patch demotes anonymous [...]

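The policy stated in the description and in the quoted comment block,
anonymous pages only with MADV_FREE skipped, reduces to a check along these
lines. This is a kernel-style sketch, not the patch itself;
next_demotion_node() is a made-up helper standing in for however the target
PMEM node gets picked:

/*
 * Sketch of the eligibility check described above; not the patch.
 * next_demotion_node() is a made-up helper that returns the PMEM node
 * to demote to, or NUMA_NO_NODE if there is none.
 */
static bool can_demote_anon_page(struct page *page)
{
	/* anonymous pages only for now */
	if (!PageAnon(page))
		return false;
	/* MADV_FREE pages are anonymous but no longer swap backed; skip */
	if (!PageSwapBacked(page))
		return false;
	return next_demotion_node(page_to_nid(page)) != NUMA_NO_NODE;
}
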