On Tue, Dec 06, 2016 at 11:43:45AM +0900, Joonsoo Kim wrote:
> > actually clear at all it's an unfair situation, particularly given that the
> > vanilla code is also unfair -- the vanilla code can artificially preserve
> > MIGRATE_UNMOVABLE without any clear indication that it is a universal win.
>
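A note on the mechanism being argued about: the vanilla per-cpu lists are keyed by migratetype, and an allocation is served from the list matching its own migratetype, so pages freed as MIGRATE_UNMOVABLE tend to be handed straight back to unmovable users. A minimal userspace sketch of that reading follows; it is an illustration, not the kernel code, and every name in it is invented:

#include <stdio.h>

enum mt { MT_UNMOVABLE, MT_MOVABLE, MT_RECLAIMABLE, MT_NR };

struct page { int id; enum mt cached_mt; struct page *next; };

/* One free list per migratetype, as on a per-cpu pageset. */
static struct page *pcp_list[MT_NR];

static void pcp_free(struct page *page)
{
        /* A freed page keeps the migratetype cached at allocation time. */
        page->next = pcp_list[page->cached_mt];
        pcp_list[page->cached_mt] = page;
}

static struct page *pcp_alloc(enum mt type)
{
        /* Requests are served from the matching list, so unmovable
         * pages stay effectively reserved for unmovable allocations. */
        struct page *page = pcp_list[type];

        if (page)
                pcp_list[type] = page->next;
        return page;
}

int main(void)
{
        struct page p = { .id = 1, .cached_mt = MT_UNMOVABLE };

        pcp_free(&p);
        printf("movable alloc:   %p\n", (void *)pcp_alloc(MT_MOVABLE));   /* NULL */
        printf("unmovable alloc: %p\n", (void *)pcp_alloc(MT_UNMOVABLE)); /* &p */
        return 0;
}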
On Mon, Dec 05, 2016 at 09:57:39AM +0000, Mel Gorman wrote:
> On Mon, Dec 05, 2016 at 12:06:19PM +0900, Joonsoo Kim wrote:
> > On Fri, Dec 02, 2016 at 09:04:49AM +0000, Mel Gorman wrote:
> > > On Fri, Dec 02, 2016 at 03:03:46PM +0900, Joonsoo Kim wrote:
> > > > > @@ -1132,14 +1134,17 @@ static void free_pcppages_bulk(struct zone
> > > > > *zone, int count,
> > > > >
[...]
On Fri, Dec 02, 2016 at 09:21:08AM +0100, Michal Hocko wrote:
> On Fri 02-12-16 15:03:46, Joonsoo Kim wrote:
> [...]
> > > o pcp accounting during free is now confined to free_pcppages_bulk as it's
> > >   impossible for the caller to know exactly how many pages were freed.
> > >   Due to the high-order caches, the number of pages drained for a request
> > >   is no longer [...]
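The accounting point comes from the high-order caches: a single cached entry of order k stands for 1 << k pages, so a caller asking free_pcppages_bulk for "count" pages cannot predict exactly how many will be freed; only the drain loop knows. A minimal userspace sketch of that follows; the structure is simplified and invented, and only the function name matches the kernel's:

#include <stdio.h>

#define MAX_ORDER 3

struct pcp_entry { int order; struct pcp_entry *next; };

struct per_cpu_pages {
        int count;                              /* pages, not entries */
        struct pcp_entry *lists[MAX_ORDER + 1]; /* one list per order */
};

/* Free up to "count" pages back to the zone; returns pages freed. */
static int free_pcppages_bulk(struct per_cpu_pages *pcp, int count)
{
        int freed = 0;

        while (freed < count) {
                struct pcp_entry *entry = NULL;
                int order;

                for (order = 0; order <= MAX_ORDER; order++) {
                        if (pcp->lists[order]) {
                                entry = pcp->lists[order];
                                pcp->lists[order] = entry->next;
                                break;
                        }
                }
                if (!entry)
                        break;
                /*
                 * One entry may be worth several pages; adjust the
                 * counter here, where the real number is known.
                 */
                freed += 1 << entry->order;
                pcp->count -= 1 << entry->order;
        }
        return freed;
}

int main(void)
{
        struct pcp_entry e = { .order = 2 };    /* one entry, 4 pages */
        struct per_cpu_pages pcp = { .count = 4, .lists = { [2] = &e } };
        int freed = free_pcppages_bulk(&pcp, 1);

        printf("requested 1, freed %d, pcp->count now %d\n", freed, pcp.count);
        return 0;
}

Running it reports 4 pages freed for a request of 1, which is why the counter update has to live inside the drain loop rather than with the caller.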
On Fri, Dec 02, 2016 at 09:04:49AM +0000, Mel Gorman wrote:
> On Fri, Dec 02, 2016 at 03:03:46PM +0900, Joonsoo Kim wrote:
> > > @@ -1132,14 +1134,17 @@ static void free_pcppages_bulk(struct zone *zone,
> > > 	int count,
> > > 		if (unlikely(isolated_pageblocks))
> > > 			mt = get_pageblock_migratetype(page);
> > >
> > > +
[...]
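The hunk above concerns which free list a drained page goes back to: the migratetype cached when the page entered the per-cpu list is cheap to use but can go stale if a pageblock was isolated in the meantime, so in that case it is re-read from the pageblock bitmap. A userspace sketch of the idea; apart from get_pageblock_migratetype, the names are invented:

#include <stdbool.h>
#include <stdio.h>

enum mt { MT_MOVABLE, MT_ISOLATE };

struct page { enum mt cached_mt; enum mt pageblock_mt; };

/* Stand-in for the authoritative lookup in the pageblock bitmap. */
static enum mt get_pageblock_migratetype(const struct page *page)
{
        return page->pageblock_mt;
}

/* Decide which free list a drained page should go back to. */
static enum mt free_list_for(const struct page *page, bool isolated_pageblocks)
{
        enum mt mt = page->cached_mt;   /* cheap value cached at free time */

        /* The pageblock may have been isolated after the page was cached. */
        if (isolated_pageblocks)
                mt = get_pageblock_migratetype(page);
        return mt;
}

int main(void)
{
        struct page page = { .cached_mt = MT_MOVABLE, .pageblock_mt = MT_ISOLATE };

        printf("without re-check: %d, with re-check: %d\n",
               free_list_for(&page, false), free_list_for(&page, true));
        return 0;
}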
On Fri 02-12-16 00:22:44, Mel Gorman wrote:
> Changelog since v4
> o Avoid pcp->count getting out of sync if struct page gets corrupted
>
> Changelog since v3
> o Allow high-order atomic allocations to use reserves
>
> Changelog since v2
> o Correct initialisation to avoid -Woverflow warning
>
>
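On the first v5 item: during a bulk drain, a page that fails its free-time sanity check is skipped rather than freed, so pcp->count has to drop when the page is unlinked from the list, not only when it is actually handed to the buddy allocator, or the counter drifts away from the list contents. A minimal userspace sketch of that ordering, with invented names:

#include <stdbool.h>
#include <stdio.h>

struct page { bool corrupted; struct page *next; };

struct pcp { int count; struct page *list; };

/* Unlink one page from the per-cpu list and try to free it. */
static void drain_one(struct pcp *pcp)
{
        struct page *page = pcp->list;

        if (!page)
                return;
        pcp->list = page->next;
        pcp->count--;           /* unlinked: the counter must drop here... */
        if (page->corrupted)
                return;         /* ...even if the page is never freed */
        /* otherwise the page would be handed back to the buddy lists */
}

int main(void)
{
        struct page bad = { .corrupted = true };
        struct pcp pcp = { .count = 1, .list = &bad };

        drain_one(&pcp);
        /* count matches the now-empty list although nothing was freed */
        printf("pcp.count = %d\n", pcp.count);
        return 0;
}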
Hello, Mel.
I didn't follow the previous discussion, so what I raise here may be a
duplicate. Please point me to the link if it was answered before.
On Fri, Dec 02, 2016 at 12:22:44AM +0000, Mel Gorman wrote:
> Changelog since v4
> o Avoid pcp->count getting out of sync if struct page gets corrupted
[...]