On Mon, Mar 16, 2020 at 07:13:24PM +0100, Christoph Hellwig wrote:
> On Mon, Mar 16, 2020 at 03:07:13PM -0300, Jason Gunthorpe wrote:
> > I chose this to be simple without having to goto unwind it.
> >
> > So, instead like this:
>
> As said, and per the previous discussion: I think just removing the
> pgmap lookup is the right thing to do here. Something like this patch:
On Mon, Mar 16, 2020 at 10:02:50AM +0100, Christoph Hellwig wrote:
> On Wed, Mar 11, 2020 at 03:35:00PM -0300, Jason Gunthorpe wrote:
> > @@ -694,6 +672,15 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
> >  			return -EBUSY;
> >  		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
>
On 3/11/20 11:35 AM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe
>
> The pgmap is held in the hmm_vma_walk variable in hope of speeding up
> future get_dev_pagemap() calls by hitting the same pointer. The algorithm
> doesn't actually care about how long the pgmap is held for.
>
> Move the put of the cached pgmap to after the walk is completed and de