vious problem but a simple fix like this; I'm not sure, though, so let's
leave the decision to him. We can learn a lesson from him and follow his
decision from now on.
Thanks.
Acked-by: Minchan Kim
>
> Acked-by: Zhang Yanfei
> Reviewed-by: Michal Nazarewicz
> Reviewed-by: Anees
*/
> - mutex_unlock(&cma->lock);
> + spin_unlock(&cma->lock);
>
> pfn = cma->base_pfn + (bitmapno << cma->order_per_bit);
> mutex_lock(&cma_mutex);
> --
> 1.7.9.5
>
7 @@ int __init cma_declare_contiguous(phys_addr_t size,
>* Each reserved area must be initialised later, when more kernel
>* subsystems (like slab allocator) are available.
>*/
> + cma = &cma_areas[cma_area_count];
> cma->base_pfn = PFN_DOWN(base);
> cma->count = size >> PAGE_SHIFT;
> cma->order_per_bit = order_per_bit;
> --
> 1.7.9.5
--
Kind regards,
Minchan Kim
On Thu, Jun 12, 2014 at 12:21:45PM +0900, Joonsoo Kim wrote:
> We can remove one call site for clear_cma_bitmap() if we first call it
> before checking the error code.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Minchan Kim
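For reference, the single-call-site shape the changelog describes is roughly
this (a sketch of the assumed cma_alloc() retry loop, not the exact hunk):

        ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
        mutex_unlock(&cma_mutex);
        if (ret == 0) {
                page = pfn_to_page(pfn);
                break;
        }

        /* single call site, reached on every error path */
        clear_cma_bitmap(cma, pfn, count);
        if (ret != -EBUSY)
                break;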
--
Kind regards,
Minchan Kim
> There is no functional change in DMA APIs.
>
> v2: There is no big change from v1 in mm/cma.c. Mostly renaming.
>
> Acked-by: Michal Nazarewicz
> Signed-off-by: Joonsoo Kim
Actually, I want to remove the bool return of cma_release(), but that's out
of the scope of this patchset.
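For context, the signature in question is assumed to look like:

        bool cma_release(struct cma *cma, struct page *pages, int count);

and the cleanup being deferred would simply turn that bool into void.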
> So support arbitrary bitmap granularity for the following generalization.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Minchan Kim
Just a nit below.
>
> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index bc4c171..9bc9340 100644
> --- a/drivers/ba
On Thu, Jun 12, 2014 at 03:43:55PM +0900, Joonsoo Kim wrote:
> On Thu, Jun 12, 2014 at 03:06:10PM +0900, Minchan Kim wrote:
> > On Thu, Jun 12, 2014 at 12:21:42PM +0900, Joonsoo Kim wrote:
> > > ppc kvm's cma region management requires arbitrary bitmap granularity,
> >
>
> - pfn = cma->base_pfn + pageno;
> + pfn = cma->base_pfn + (bitmapno << cma->order_per_bit);
> mutex_lock(&cma_mutex);
> ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
> mutex_unlock(&cma_mutex);
> @@ -392,7 +421,7 @@ static struct page *__dma_alloc_from_contiguous(struct cma *cma, int count,
> pr_debug("%s(): memory range at %p is busy, retrying\n",
>__func__, pfn_to_page(pfn));
> /* try again with a bit different memory target */
> - start = pageno + mask + 1;
> + start = bitmapno + mask + 1;
> }
>
> pr_debug("%s(): returned %p\n", __func__, page);
> --
> 1.7.9.5
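For clarity, the granularity model the hunks above rely on: one bitmap bit
covers 2^order_per_bit pages, so the conversions look like this (the reverse
direction and the bitmap size are inferred, not quoted from the patch):

        /* bitmap index -> first pfn of the chunk (as in the hunk above) */
        pfn = cma->base_pfn + (bitmapno << cma->order_per_bit);

        /* pfn -> bitmap index (inferred reverse mapping) */
        bitmapno = (pfn - cma->base_pfn) >> cma->order_per_bit;

        /* number of bitmap bits covering the whole area (inferred) */
        bitmap_maxno = cma->count >> cma->order_per_bit;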
--
Kind regards,
Minchan Kim
uous_reserve_area(phys_addr_t size,
> phys_addr_t base,
> {
> int ret;
>
> - ret = __dma_contiguous_reserve_area(size, base, limit, res_cma, fixed);
> + ret = __dma_contiguous_reserve_area(size, base, limit, 0,
> + res_cma, fixed);
> if (ret)
> return ret;
>
> --
> 1.7.9.5
--
Kind regards,
Minchan Kim
> if (!cma || !pages)
> @@ -410,3 +430,11 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>
> return true;
> }
> +
Ditto.
> +bool dma_release_from_contiguous(struct device *dev, struct page *pages,
> + int count)
> +{
> + struct cma *cma = dev_get_cma_area(dev);
> +
> + return __dma_release_from_contiguous(cma, pages, count);
> +}
> --
> 1.7.9.5
--
Kind regards,
Minchan Kim
} while (--i);
>
> mutex_init(&cma->lock);
> return 0;
> +
> +err:
> + kfree(cma->bitmap);
> + return -EINVAL;
> }
>
> static struct cma cma_areas[MAX_CMA_AREAS];
> --
> 1.7.9.5
--
Kind regards,
Minchan Kim
%ld MiB at %08lx\n",
> + __func__, (unsigned long)size / SZ_1M, (unsigned long)base);
>
> /* Architecture specific contiguous memory fixup. */
> dma_contiguous_early_fixup(base, size);
> return 0;
> err:
> - pr_err("CMA: failed to
;
> Signed-off-by: Mel Gorman
> Tested-by: Tony Prisk
Acked-by: Minchan Kim
--
Kind Regards,
Minchan Kim
= pageblock_nr_pages)
> > > > > {
> > > > > + struct page *page;
> > > > > + if (!pfn_valid(pfn))
> > > > > + continue;
> > > > > +
> > > > > + page = pf
On Tue, Sep 25, 2012 at 02:39:31PM -0700, Andrew Morton wrote:
> On Tue, 25 Sep 2012 17:13:27 +0900
> Minchan Kim wrote:
>
> > I see. To me, your wording is better than the current comment.
> > I hope the comment could be more explicit.
> >
> > diff --git a/mm/compact
On Tue, Sep 25, 2012 at 08:51:05AM +0100, Mel Gorman wrote:
> On Tue, Sep 25, 2012 at 04:05:17PM +0900, Minchan Kim wrote:
> > Hi Mel,
> >
> > I have a question below.
> >
> > On Fri, Sep 21, 2012 at 11:46:19AM +0100, Mel Gorman wrote:
> > > Com
nners.
>
> Signed-off-by: Mel Gorman
> Acked-by: Rik van Riel
Acked-by: Minchan Kim
--
Kind regards,
Minchan Kim
pfn_valid_within()?
> >
>
> Comment is stale and should be removed.
>
> > > @@ -218,13 +272,18 @@ static unsigned long
> > > isolate_freepages_block(unsigned long blockpfn,
> > > unsigned long
> > > isolate_freepages_range(unsigned long start_pfn, unsigned long end_pfn)
> > > {
> > > - unsigned long isolated, pfn, block_end_pfn, flags;
> > > + unsigned long isolated, pfn, block_end_pfn;
> > > struct zone *zone = NULL;
> > > LIST_HEAD(freelist);
> > > + struct compact_control cc;
> > >
> > > if (pfn_valid(start_pfn))
> > > zone = page_zone(pfn_to_page(start_pfn));
> > >
> > > + /* cc needed for isolate_freepages_block to acquire zone->lock */
> > > + cc.zone = zone;
> > > + cc.sync = true;
> >
> > We initialise two of cc's fields, leave the other 12 fields containing
> > random garbage, then start using it. I see no bug here, but...
> >
>
> I get your point. At the very least we should initialise it with zeros.
>
> How about this?
>
> ---8<---
> mm: compaction: Iron out isolate_freepages_block() and
> isolate_freepages_range()
>
> Andrew pointed out that isolate_freepages_block() is "straggly" and
> isolate_freepages_range() is making assumptions on how compact_control is
> used which is delicate. This patch straightens isolate_freepages_block()
> and makes it fly straight and initialises compact_control to zeros in
> isolate_freepages_range(). The code should be easier to follow and
> is functionally equivalent. The CMA failure path is now a little more
> expensive but that is a marginal corner-case.
>
> Signed-off-by: Mel Gorman
Acked-by: Minchan Kim
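The zero-initialisation Andrew asked for is most naturally done with a
designated initialiser, which clears every field that isn't named (a sketch,
not the verbatim hunk):

        struct compact_control cc = {
                .zone = zone,
                .sync = true,
        };

Anything isolate_freepages_block() looks at beyond .zone and .sync then
starts out as zero instead of stack garbage.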
--
Kind regards,
Minchan Kim
ock for as long as possible. In the
> event there are no free pages in the pageblock then the lock will not be
> acquired at all which reduces contention on zone->lock.
>
> Signed-off-by: Mel Gorman
> Acked-by: Rik van Riel
Acked-by: Minchan Kim
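The shape of the deferred locking being acked is roughly this (a sketch of
the idea, not the exact patch):

        bool locked = false;
        unsigned long flags;

        for (; blockpfn < end_pfn; blockpfn++) {
                struct page *page = pfn_to_page(blockpfn);

                if (!PageBuddy(page))
                        continue;

                /* take zone->lock only once a free page is actually found */
                if (!locked) {
                        spin_lock_irqsave(&zone->lock, flags);
                        locked = true;
                }
                /* ... isolate the free page under the lock ... */
        }

        if (locked)
                spin_unlock_irqrestore(&zone->lock, flags);

If the pageblock has no free pages, zone->lock is never taken at all.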
--
Kind regards,
Minchan Kim
--
Arcangeli
> Signed-off-by: Shaohua Li
> Signed-off-by: Mel Gorman
> Acked-by: Rik van Riel
Acked-by: Minchan Kim
--
Kind regards,
Minchan Kim
> }
> @@ -444,6 +460,13 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
> ++low_pfn;
> break;
> }
> +
> + continue;
> +
> +next_pageblock:
> + lo
On Thu, Dec 22, 2011 at 12:57:40PM +, Stefan Hajnoczi wrote:
> On Wed, Dec 21, 2011 at 1:00 AM, Minchan Kim wrote:
> > This patch is a follow-up to Christoph Hellwig's work
> > [RFC: ->make_request support for virtio-blk].
> > http://thread.gmane.org/gmane.linux
On Thu, Dec 22, 2011 at 10:45:06AM -0500, Vivek Goyal wrote:
> On Thu, Dec 22, 2011 at 10:05:38AM +0900, Minchan Kim wrote:
>
> [..]
> > > May be using deadline or noop in guest is better to benchmark against
> > > PCI-E based flash.
> >
> > Good suggestio
Hi Vivek,
On Wed, Dec 21, 2011 at 02:11:17PM -0500, Vivek Goyal wrote:
> On Wed, Dec 21, 2011 at 10:00:48AM +0900, Minchan Kim wrote:
> > This patch is a follow-up to Christoph Hellwig's work
> > [RFC: ->make_request support for virtio-blk].
> > http://thread.gmane.o
Hi Sasha!
On Wed, Dec 21, 2011 at 10:28:52AM +0200, Sasha Levin wrote:
> On Wed, 2011-12-21 at 10:00 +0900, Minchan Kim wrote:
> > This patch is a follow-up to Christoph Hellwig's work
> > [RFC: ->make_request support for virtio-blk].
> > http://thread.gmane.o
Hi Rusty,
On Wed, Dec 21, 2011 at 03:38:03PM +1030, Rusty Russell wrote:
> On Wed, 21 Dec 2011 10:00:48 +0900, Minchan Kim wrote:
> > This patch is a follow-up to Christoph Hellwig's work
> > [RFC: ->make_request support for virtio-blk].
> > http://thread.gmane.o
ue.
If a non-contiguous I/O is issued or 1ms passes, the batch queue is drained.
Signed-off-by: Minchan Kim
---
drivers/block/virtio_blk.c | 366 +++-
1 files changed, 331 insertions(+), 35 deletions(-)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/vi
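A rough sketch of that drain policy (every identifier below is hypothetical
and not taken from the actual patch):

        struct vb_batch {
                struct bio_list bios;           /* pending contiguous bios */
                sector_t next_sector;           /* where the next bio must start */
                unsigned long deadline;         /* jiffies: batch start + 1ms */
        };

        static void vb_queue_bio(struct vb_batch *b, struct bio *bio)
        {
                /* drain if the new bio breaks contiguity or the batch is old */
                if (!bio_list_empty(&b->bios) &&
                    (bio->bi_sector != b->next_sector ||
                     time_is_before_jiffies(b->deadline)))
                        vb_drain_batch(b);      /* hypothetical: submit + kick */

                if (bio_list_empty(&b->bios))
                        b->deadline = jiffies + msecs_to_jiffies(1);

                bio_list_add(&b->bios, bio);
                b->next_sector = bio->bi_sector + bio_sectors(bio);
        }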
This patch emulates flush/fua on virtio-blk and passes xfstests on ext4.
But it needs more review.
Signed-off-by: Minchan Kim
---
drivers/block/virtio_blk.c | 89 ++-
1 files changed, 86 insertions(+), 3 deletions(-)
diff --git a/drivers/block
ioctl but it's just
used for ioctl, not file system.
Signed-off-by: Christoph Hellwig
Signed-off-by: Minchan Kim
---
drivers/block/virtio_blk.c | 303
1 files changed, 247 insertions(+), 56 deletions(-)
diff --git a/drivers/block/virtio_bl
From: Christoph Hellwig
Signed-off-by: Christoph Hellwig
Signed-off-by: Minchan Kim
---
drivers/block/virtio_blk.c | 10 --
1 files changed, 0 insertions(+), 10 deletions(-)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4d0b70a..26d4443 100644
--- a
al or corrections for
the commit message are welcome.
Signed-off-by: Christoph Hellwig
Signed-off-by: Minchan Kim
---
drivers/virtio/virtio_ring.c | 33 ++---
include/linux/virtio.h | 21 +
2 files changed, 47 insertions(+), 7 deletions(-)
diff
From: Christoph Hellwig
Add a helper to map a bio to a scatterlist, modelled after blk_rq_map_sg.
This helper is useful for any driver that wants to create a scatterlist
from its ->make_request method.
Signed-off-by: Christoph Hellwig
Signed-off-by: Minchan Kim
---
block/blk-merg
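A usage sketch for the helper (the exact signature is assumed here by analogy
with blk_rq_map_sg(); the segment bound is hypothetical):

        struct scatterlist sg[VBLK_MAX_SEGMENTS];
        int nents;

        sg_init_table(sg, VBLK_MAX_SEGMENTS);
        nents = bio_map_sg(q, bio, sg);         /* one sg entry per bio segment */
        /* hand sg[0..nents-1] to the device, e.g. via virtqueue_add_buf() */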
This patch improves random I/O performance compared to the request-based I/O path.
It's still an RFC, so any comments and reviews are welcome.
Christoph Hellwig (3):
block: add bio_map_sg
virtio: support unlocked queue kick
virtio-blk: remove the unused list of pending requests
Mi
/* Prod other side to tell it about changes. */
> + vq->notify(_vq);
> +}
> +EXPORT_SYMBOL_GPL(virtqueue_notify);
> +
> +void virtqueue_kick(struct virtqueue *vq)
> +{
> + if (virtqueue_kick_prepare(vq))
> + virtqueue_notify(vq);
> }
> -EXPORT_SYMBOL_GPL(
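The point of the split is that a driver can run the cheap prepare step under
its lock and do the (possibly slow) host notification outside it, roughly
like this (a usage sketch, not a quote from the patch):

        bool kick;
        unsigned long flags;

        spin_lock_irqsave(&vblk->lock, flags);
        /* ... add the request/bio to the virtqueue ... */
        kick = virtqueue_kick_prepare(vblk->vq);
        spin_unlock_irqrestore(&vblk->lock, flags);

        if (kick)
                virtqueue_notify(vblk->vq);

virtqueue_kick() itself stays equivalent to prepare + notify.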
cks from people who use/know
> hugetlbfs. Who would that be? I'm adding random people who have
> acked/signed-off patches to hugetlbfs recently..
At least the code itself looks good to me, but your random choice failed.
Maybe the people you want are as follows.
http://marc.info/?t=126928975800
On Tue, Apr 26, 2011 at 6:39 PM, Dave Young wrote:
> On Tue, Apr 26, 2011 at 5:28 PM, Minchan Kim wrote:
>> Please resend this with [2/2] to linux-mm.
>>
>> On Tue, Apr 26, 2011 at 5:59 PM, Dave Young
>> wrote:
>>> When memory pressure is high, virti
is used by
do_try_to_free_pages() in mmotm, so it could cause unnecessary OOM kills.
BTW, I can't understand why we need to handle virtio specially.
Could you explain it in detail? :)
--
Kind regards,
Minchan Kim
On Mon, Feb 14, 2011 at 2:33 AM, Balbir Singh wrote:
> * MinChan Kim [2011-02-10 14:41:44]:
>
>> I don't know why part of the message gets deleted only when I send it to you.
>> Maybe it's a Gmail bug.
>>
>> I hope the mail goes through successfully this time. :)
>&g
I don't know why part of the message gets deleted only when I send it to you.
Maybe it's a Gmail bug.
I hope the mail goes through successfully this time. :)
On Thu, Feb 10, 2011 at 2:33 PM, Minchan Kim wrote:
> Sorry for late response.
>
> On Fri, Jan 28, 2011 at 8:18 PM, Balbir Singh
&g
Sorry for late response.
On Fri, Jan 28, 2011 at 8:18 PM, Balbir Singh wrote:
> * MinChan Kim [2011-01-28 16:24:19]:
>
>> >
>> > But the assumption for LRU order to change happens only if the page
>> > cannot be successfully freed, which means it is in some
On Fri, Jan 28, 2011 at 3:48 PM, Balbir Singh wrote:
> * MinChan Kim [2011-01-28 14:44:50]:
>
>> On Fri, Jan 28, 2011 at 11:56 AM, Balbir Singh
>> wrote:
>> > On Thu, Jan 27, 2011 at 4:42 AM, Minchan Kim wrote:
>> > [snip]
>> >
>> >>>
On Fri, Jan 28, 2011 at 11:56 AM, Balbir Singh
wrote:
> On Thu, Jan 27, 2011 at 4:42 AM, Minchan Kim wrote:
> [snip]
>
>>> index 7b56473..2ac8549 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -1660,6 +1660,9 @@ zonelist_scan:
);
> +
> if (!zone_watermark_ok_safe(zone, order,
> high_wmark_pages(zone), 0, 0)) {
> end_zone = i;
> @@ -2396,6 +2425,11 @@ loop_again:
> continue;
>
>
On Thu, Dec 23, 2010 at 5:33 PM, Balbir Singh wrote:
> * MinChan Kim [2010-12-14 20:02:45]:
>
>> > + if (should_reclaim_unmapped_pages(zone))
>> > + wakeup_kswapd(zone, order);
>>
>> I think we can pu
On Tue, Dec 14, 2010 at 8:45 PM, Balbir Singh wrote:
> * MinChan Kim [2010-12-14 19:01:26]:
>
>> Hi Balbir,
>>
>> On Fri, Dec 10, 2010 at 11:31 PM, Balbir Singh
>> wrote:
>> > Move reusable functionality outside of zone_reclaim.
>> > Make zone_rec
> +#define UNMAPPED_PAGE_RATIO 16
> +
> +bool should_reclaim_unmapped_pages(struct zone *zone)
> +{
> + if (unlikely(unmapped_page_control) &&
> + (zone_unmapped_file_pages(zone) >
> + UNMAPPED_PAGE_RATIO * zone->min_unmapped_pages))
> + return true;
> + return false;
> +}
> +#endif
> +
>
> /*
> * page_evictable - test whether a page is evictable
>
>
At first look, the code isn't in good shape.
But the more important thing is how many users there are.
Can we pay the cost of maintaining a feature with few users?
It depends on your effort to prove the use cases and the benefit.
It would be good to give real data.
If we really want this, how about the new cache LRU idea Kame suggested?
For example, add_to_page_cache_lru() adds the page to the cache LRU,
page_add_file_rmap() moves the page to the inactive file LRU, and
page_remove_rmap() moves the page back to the cache LRU.
We can count the unmapped pages and, if the count exceeds a limit,
wake up kswapd, as in the sketch below.
Whenever memory pressure happens, the reclaimer first tries to reclaim
the cache LRU.
It can improve reclaim latency, which is good for embedded people.
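A rough sketch of that idea (every helper and counter name below is
hypothetical, not an existing kernel symbol):

        /* on add_to_page_cache_lru(): page starts on the unmapped cache LRU */
        lru_add_page(page, LRU_UNMAPPED_CACHE);
        zone->nr_unmapped_cache++;

        /* on page_add_file_rmap(): the first mapping promotes it */
        if (atomic_inc_and_test(&page->_mapcount)) {
                lru_move_page(page, LRU_INACTIVE_FILE);
                zone->nr_unmapped_cache--;
        }

        /* on page_remove_rmap(): the last unmap sends it back */
        if (atomic_add_negative(-1, &page->_mapcount)) {
                lru_move_page(page, LRU_UNMAPPED_CACHE);
                if (++zone->nr_unmapped_cache > unmapped_limit)
                        wakeup_kswapd(zone, 0);
        }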
--
Kind regards,
Minchan Kim
nmapped pages.
The function name is rather strange.
It would be better to set up scan_control inside the function so that it
reclaims only unmapped pages.
--
Kind regards,
Minchan Kim
many page cache pages are mapped into the application's address space
(e.g., Android uses the Java model, so many pages are mapped into the
application's address space). That means it's hard to reclaim them
without lagging.
So I have an interest in this patch.
--
Kind regards,
Minchan Kim
ages" in your function.
Maybe the caller of this function has to handle sc->may_unmap, so this
function's naming is not good.
As I read your logic, the function name would be just "zone_reclaim_pages".
If you want to name it zone_reclaim_unmapped_pages, it could be
bette
ata pages, it's not a big deal.
In this case, keeping the code simple is better than a little bit of
performance overhead.
>
> Signed-off-by: Dave Young
Reviewed-by: Minchan Kim
Isn't it useful on nommu, too?
> ---
> include/linux/vmalloc.h | 1 +
> mm/vmalloc.c
esn't look like it's out of memory to me, so I'm not sure what is
> going on.
>
> Rgds
> --
> -- Pierre Ossman
>
> Linux kernel, MMC maintainer http://www.kernel.org
> rdesktop, core developer http://www.rdesktop.org
> TigerVNC, co