On 12/01/2016 03:24 PM, Mel Gorman wrote:
> On Thu, Dec 01, 2016 at 02:41:29PM +0100, Vlastimil Babka wrote:
> > On 12/01/2016 01:24 AM, Mel Gorman wrote:
> >
> > ...
> >
> > Hmm, I think that if this hits, we don't decrease count/increase
> > nr_freed, and pcp->count will become wrong.
>
> Ok, I think you're right, but I
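Vlastimil's concern above is an accounting invariant: a page that is unlinked from the per-cpu list but then skipped (rather than freed) must still be counted, or pcp->count drifts away from the real list length. A minimal, self-contained sketch of that invariant follows; all structure and function names here are hypothetical stand-ins, not the kernel's actual code.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
struct page { int corrupted; struct page *next; };
struct per_cpu_pages { int count; struct page *list; };

/*
 * Sketch of the invariant discussed above: a page that is skipped
 * (e.g. it failed a sanity check) has still been removed from the
 * list, so pcp->count must be decremented for it as well; otherwise
 * pcp->count no longer matches the number of pages on the list.
 */
static int free_bulk_sketch(struct per_cpu_pages *pcp, int count)
{
	int nr_freed = 0;

	while (count > 0 && pcp->list) {
		struct page *page = pcp->list;

		pcp->list = page->next;
		count--;
		pcp->count--;	/* decrement even when the page is skipped */
		if (page->corrupted)
			continue;	/* skipped, but accounting stays in sync */
		nr_freed++;
	}
	return nr_freed;
}
```

If the two decrements were moved below the `corrupted` check instead, a skipped page would leave pcp->count one too high, which is exactly the drift described above.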
On Thu 01-12-16 00:24:40, Mel Gorman wrote:
> Changelog since v3
> o Allow high-order atomic allocations to use reserves
>
> Changelog since v2
> o Correct initialisation to avoid -Woverflow warning
>
> SLUB has been the default small kernel object allocator for quite some time
> but it is not universally used due to performance concerns and a reliance
> on
On 12/01/2016 01:24 AM, Mel Gorman wrote:

...

@@ -1096,28 +1097,29 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	if (nr_scanned)
 		__mod_node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED,
 					-nr_scanned);
-	while (count) {
+	while (count > 0)
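One plausible reason for changing `while (count)` to `while (count > 0)` — a guess from this hunk alone, not a claim about the rest of the patch — is that a loop body which can consume more than one unit of count per iteration may step past zero, and a plain non-zero test would then never terminate. A hypothetical sketch:

```c
#include <assert.h>

/*
 * Hypothetical illustration of why "while (count > 0)" is safer than
 * "while (count)": if each iteration may consume a whole batch, count
 * can overshoot below zero, and a "!= 0" test would never see it stop.
 */
static int drain_sketch(int count, int batch)
{
	int iterations = 0;

	while (count > 0) {	/* "while (count)" could spin forever here */
		count -= batch;	/* may step past zero */
		iterations++;
	}
	return iterations;
}
```

With count = 5 and batch = 2, count goes 5, 3, 1, -1: the `> 0` form stops after the overshoot, while a `!= 0` form would keep looping on negative values.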