Fixes: 25078dc1f74b ("powerpc: use mm zones more sensibly")
Fixes: 9739ab7eda45 ("powerpc: enable a 30-bit ZONE_DMA for 32-bit pmac")
Signed-off-by: Andrea Arcangeli
---
arch/powerpc/mm/mem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> invalidate_range() already.
>
> CC: Benjamin Herrenschmidt
> CC: Paul Mackerras
> CC: Michael Ellerman
> CC: Alistair Popple
> CC: Alexey Kardashevskiy
> CC: Mark Hairgrove
> CC: Balbir Singh
> CC: David Gibson
> CC: Andrea Arcangeli
> CC: Jerome Glisse
> take a page pin by
> migrating pages from CMA region. Marking the section PF_MEMALLOC_NOCMA ensures
> that we avoid unnecessary page migration later.
>
> Suggested-by: Andrea Arcangeli
> Signed-off-by: Aneesh Kumar K.V
Reviewed-by: Andrea Arcangeli
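To make the PF_MEMALLOC_NOCMA idea in the quoted commit message concrete, here is a minimal
sketch, not the series' actual get_user_pages_cma_migrate() implementation: it assumes the
memalloc_nocma_save()/memalloc_nocma_restore() helpers this series introduces, and the exact
get_user_pages() signature varies between kernel versions.

#include <linux/mm.h>
#include <linux/sched/mm.h>

/*
 * Illustrative only: pin user pages for a long-lived mapping while the
 * current task's allocations (e.g. pages faulted in by GUP itself) are
 * kept out of the CMA region for the duration of the section.
 */
static long pin_pages_nocma(unsigned long ua, unsigned long entries,
			    struct page **pages)
{
	unsigned int flags;
	long pinned;

	flags = memalloc_nocma_save();		/* sets PF_MEMALLOC_NOCMA */
	pinned = get_user_pages(ua, entries, FOLL_WRITE, pages, NULL);
	memalloc_nocma_restore(flags);

	return pinned;
}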
Hello,
On Tue, Jan 08, 2019 at 10:21:09AM +0530, Aneesh Kumar K.V wrote:
> @@ -187,41 +149,25 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
> goto unlock_exit;
> }
>
> + ret = get_user_pages_cma_migrate(ua, entries, 1, mem->hpages);
In terms
On Thu, Nov 02, 2017 at 06:25:11PM +0100, Laurent Dufour wrote:
> I think there is some memory barrier missing when the VMA is modified so
> currently the modifications done in the VMA structure may not be written
> down at the time the pte is locked. So doing that change will also require
> to ca
Hello Laurent,
Message-ID: <7ca80231-fe02-a3a7-84bc-ce81690ea...@intel.com> shows
significant slowdown even for brk/malloc ops both single and
multi threaded.
The single threaded case I think is the most important because it has
zero chance of getting back any benefit later during page faults.
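On the barrier concern Laurent raises above: the usual way to publish VMA changes to a
lockless reader is a sequence count. A rough sketch of that pattern follows; it is purely
illustrative, the vm_sequence field and the struct name are hypothetical, and this is not
the actual speculative-page-fault code.

#include <linux/seqlock.h>

struct spec_vma {			/* hypothetical stand-in for vm_area_struct */
	unsigned long vm_start, vm_end;
	seqcount_t vm_sequence;		/* hypothetical field */
};

static void vma_update(struct spec_vma *vma, unsigned long start,
		       unsigned long end)
{
	/* orders the counter bump before the VMA stores */
	write_seqcount_begin(&vma->vm_sequence);
	vma->vm_start = start;
	vma->vm_end = end;
	/* orders the VMA stores before the final counter bump */
	write_seqcount_end(&vma->vm_sequence);
}

/* reader side: seq was sampled earlier with read_seqcount_begin() */
static bool vma_snapshot_valid(struct spec_vma *vma, unsigned int seq)
{
	return !read_seqcount_retry(&vma->vm_sequence, seq);
}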
Hello Jerome,
On Fri, Sep 01, 2017 at 01:30:11PM -0400, Jerome Glisse wrote:
> +Case A is obvious you do not want to take the risk for the device to write to
> +a page that might now be use by some completely different task.
used
> +is true even if the thread doing the page table update is preempted
date to new mmu_notifier semantic
> xen/gntdev: update to new mmu_notifier semantic
> KVM: update to new mmu_notifier semantic
> mm/mmu_notifier: kill invalidate_page
Reviewed-by: Andrea Arcangeli
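For context, the conversion these patches perform has roughly the following shape; this is a
simplified sketch using the mmu_notifier calling convention of that era, not any particular
driver's code.

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

static void unmap_one_page(struct mm_struct *mm, unsigned long address)
{
	/* instead of a per-page invalidate_page() callback after the fact ... */
	mmu_notifier_invalidate_range_start(mm, address, address + PAGE_SIZE);
	/* ... clear the PTE and flush the TLB under the page table lock ... */
	mmu_notifier_invalidate_range_end(mm, address, address + PAGE_SIZE);
}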
Hi Aneesh,
On Mon, Oct 27, 2014 at 11:28:41PM +0530, Aneesh Kumar K.V wrote:
> VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> if (pmd_trans_huge(*pmdp)) {
> pmd = pmdp_get_and_clear(vma->vm_mm, address, pmdp);
> } else {
The only problematic path that needs IPI is the bel
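For readers following along, the kind of serialization being discussed looks roughly like
this. It is a conceptual sketch only, not the powerpc implementation; the function name is
made up, and TLB flushing is omitted.

#include <linux/mm.h>
#include <linux/smp.h>

static pmd_t clear_huge_pmd_and_sync(struct vm_area_struct *vma,
				     unsigned long address, pmd_t *pmdp)
{
	pmd_t pmd = pmdp_get_and_clear(vma->vm_mm, address, pmdp);

	/*
	 * Broadcast an IPI so any CPU still inside a lockless
	 * (IRQ-disabled) page table walk that saw the old huge PMD has
	 * finished before the caller frees the page or the page table.
	 */
	kick_all_cpus_sync();
	return pmd;
}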
Hello,
On Mon, Oct 27, 2014 at 07:50:41AM +1100, Benjamin Herrenschmidt wrote:
> On Fri, 2014-10-24 at 09:22 -0700, James Bottomley wrote:
>
> > Parisc does this. As soon as one CPU issues a TLB purge, it's broadcast
> > to all the CPUs on the inter-CPU bus. The next instruction isn't
> > execu
Hi everyone,
On Fri, Nov 29, 2013 at 12:13:03PM +0100, Alexander Graf wrote:
>
> On 29.11.2013, at 05:38, Bharat Bhushan wrote:
>
> > Hi Alex,
> >
> > I am running KVM guest with host kernel having CONFIG_PREEMPT enabled. With
> > allocated pages things seem to work fine, but when I use hugepages
ed operations
>
> For architectures like ppc64 we look at deposited pgtable when
> calling pmdp_get_and_clear. So do the pgtable_trans_huge_withdraw
> after finishing pmdp related operations.
>
> Cc: Andrea Arcangeli
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/h
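A simplified sketch of the ordering the quoted commit message describes follows. It is not
the actual zap_huge_pmd() hunk; TLB flushing is omitted and the two-argument withdraw shown
is the form this series moves to.

#include <linux/mm.h>
#include <asm/pgtable.h>

static void zap_huge_pmd_sketch(struct vm_area_struct *vma, pmd_t *pmd,
				unsigned long addr)
{
	pgtable_t pgtable;
	pmd_t orig_pmd;

	/* ppc64's pmdp_get_and_clear() may consult the deposited table ... */
	orig_pmd = pmdp_get_and_clear(vma->vm_mm, addr, pmd);
	/* ... so only withdraw it once the pmdp operations are finished */
	pgtable = pgtable_trans_huge_withdraw(vma->vm_mm, pmd);
	pte_free(vma->vm_mm, pgtable);
}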
Hi,
On Mon, Apr 22, 2013 at 03:30:52PM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V"
>
> For architectures like ppc64 we look at deposited pgtable when
> calling pmdp_get_and_clear. So do the pgtable_trans_huge_withdraw
> after finishing pmdp related operations.
> would like to store them in the second half of pmd
>
> Cc: Andrea Arcangeli
*snip*
> #ifndef __HAVE_ARCH_PGTABLE_DEPOSIT
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable)
> +void pgtable_trans_hug
On Mon, Apr 22, 2013 at 03:30:50PM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V"
>
> On archs like powerpc that support different hugepage sizes, HPAGE_SHIFT
> and other derived values like HPAGE_PMD_ORDER are not constants. So move
> that to hugepage_init
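Purely for illustration of the point (the names below are made up, this is not the powerpc
patch itself): when HPAGE_SHIFT is only known at boot, values derived from it have to be
computed in an init function rather than being preprocessor constants.

#include <linux/init.h>
#include <linux/mm.h>

static unsigned int hpage_pmd_order;	/* hypothetical runtime variable */
static unsigned long hpage_pmd_size;	/* hypothetical runtime variable */

static int __init hugepage_init_derived(void)
{
	/* HPAGE_SHIFT is a variable on powerpc, so compute these at boot */
	hpage_pmd_order = HPAGE_SHIFT - PAGE_SHIFT;
	hpage_pmd_size = 1UL << HPAGE_SHIFT;
	return 0;
}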
Hi Kirill,
On Tue, Sep 25, 2012 at 05:27:03PM +0300, Kirill A. Shutemov wrote:
> On Fri, Sep 14, 2012 at 07:52:10AM +0200, Ingo Molnar wrote:
> > Without repeatable hard numbers such code just gets into the
> > kernel and bitrots there as new CPU generations come in - a few
> > years down the line
On Thu, Aug 16, 2012 at 09:37:25PM +0300, Kirill A. Shutemov wrote:
> On Thu, Aug 16, 2012 at 08:29:44PM +0200, Andrea Arcangeli wrote:
> > On Thu, Aug 16, 2012 at 07:43:56PM +0300, Kirill A. Shutemov wrote:
> > > Hm.. I think with static_key we can avoid cache overh
On Thu, Aug 16, 2012 at 07:43:56PM +0300, Kirill A. Shutemov wrote:
> Hm.. I think with static_key we can avoid cache overhead here. I'll try.
Could you elaborate on the static_key? Is it some sort of self-modifying code?
> Thanks for the review. Could you take a look at the huge zero page patchset? ;)
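For reference, a static_key is indeed backed by jump-label patching: the kernel rewrites the
branch site at runtime. A minimal sketch using the 3.x-era API; the key name is made up and
this is not Kirill's patch.

#include <linux/jump_label.h>

static struct static_key huge_zero_page_key = STATIC_KEY_INIT_FALSE;

static inline bool huge_zero_page_enabled(void)
{
	/*
	 * Compiles to a patched nop (or jump), so the disabled fast
	 * path costs no memory load and no conditional test.
	 */
	return static_key_false(&huge_zero_page_key);
}

static void huge_zero_page_enable(void)
{
	/* rewrites every branch site registered for this key */
	static_key_slow_inc(&huge_zero_page_key);
}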
Hi Kirill,
On Thu, Aug 16, 2012 at 06:15:53PM +0300, Kirill A. Shutemov wrote:
> for (i = 0; i < pages_per_huge_page;
>      i++, p = mem_map_next(p, page, i)) {
It may be more optimal to avoid a multiplication/shiftleft before the
add, and to do:
for (i = 0, vaddr = haddr; i
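The mail is cut off here; a guess at the intended shape of the loop, reusing the variables
from the quoted code above and with clear_user_highpage() only as a stand-in body:

for (i = 0, vaddr = haddr; i < pages_per_huge_page;
     i++, p = mem_map_next(p, page, i), vaddr += PAGE_SIZE) {
	cond_resched();
	clear_user_highpage(p, vaddr);	/* vaddr carried in the loop, no multiply */
}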
Hi,
On Wed, Jun 06, 2012 at 03:30:17PM +1000, Benjamin Herrenschmidt wrote:
> On Wed, 2012-06-06 at 00:46, Bhushan Bharat-R65777 wrote:
>
> > > >> memblock_end_of_DRAM() returns end_address + 1, not end address.
> > > >> While some code assumes that it returns end address.
> > > >
> > > > S
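The distinction being made, spelled out in a short sketch (the helper names are made up):

#include <linux/memblock.h>

/* memblock_end_of_DRAM() is an exclusive limit: the first address past DRAM */
static bool addr_in_dram(phys_addr_t addr)
{
	return addr < memblock_end_of_DRAM();		/* correct */
}

static bool addr_in_dram_buggy(phys_addr_t addr)
{
	return addr <= memblock_end_of_DRAM();		/* off by one: treats it as inclusive */
}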