NPU PHB TCE Kill register is exactly the same as in the rest of POWER8
so let's reuse the existing code for NPU. The only missing piece is
a helper to reset the entire TCE cache, so this moves that helper
out of the NPU code and renames it.
Since pnv_npu_tce_invalidate() really does invalidate the entire
As in fact pnv_pci_ioda2_tce_invalidate_entire() invalidates TCEs for
the specific PE rather than the entire cache, rename it to
pnv_pci_ioda2_tce_invalidate_pe(). In later patches we will add
a proper pnv_pci_ioda2_tce_invalidate_entire().
Signed-off-by: Alexey Kardashevskiy
Reviewed-by: David G
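As a rough illustration of the split described above (a minimal sketch, assuming the PHB keeps an MMIO pointer to its TCE Kill register and using made-up TCE_KILL_* encodings; the real register layout comes from the IODA2 spec, not from this sketch):

static void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb)
{
	/* Hypothetical "kill everything" encoding: flush the whole TCE cache */
	__raw_writeq(cpu_to_be64(TCE_KILL_INVAL_ALL), phb->ioda.tce_inval_reg);
}

static void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
{
	/* Hypothetical per-PE encoding: only this PE's cached TCEs are dropped */
	__raw_writeq(cpu_to_be64(TCE_KILL_INVAL_PE | pe->pe_number),
		     pe->phb->ioda.tce_inval_reg);
}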
We are going to have multiple different types of PHB on the same system
with POWER8 + NVLink and PHBs will have different IOMMU ops. However
we only really care about one callback - create_table - so we can
relax the compatibility check here.
Signed-off-by: Alexey Kardashevskiy
Reviewed-by: David
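A minimal sketch of what "only compare create_table" could look like (hypothetical helper name; the actual check sits in the powernv IODA code):

static bool table_group_ops_compatible(struct iommu_table_group *a,
				       struct iommu_table_group *b)
{
	/* PHBs of different types may share a group as long as they
	 * create TCE tables the same way. */
	return a->ops && b->ops && a->ops->create_table == b->ops->create_table;
}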
This replaces the magic constants for the IODA2 TCE Kill register with macros.
Signed-off-by: Alexey Kardashevskiy
Reviewed-by: David Gibson
---
arch/powerpc/platforms/powernv/pci-ioda.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c
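Illustrative before/after for this kind of cleanup (the macro name and value below are assumptions, not taken from the actual hunk):

/* Before: the meaning of the value is opaque at the call site */
__raw_writeq(cpu_to_be64(0x8000000000000000ULL), phb->ioda.tce_inval_reg);

/* After: a named constant documents the intent */
#define TCE_KILL_INVAL_ALL	PPC_BIT(0)	/* 0x8000000000000000 */
__raw_writeq(cpu_to_be64(TCE_KILL_INVAL_ALL), phb->ioda.tce_inval_reg);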
PCI-Express spec says that reading 4 bytes at offset 100h should return
zero if there is no extended capability so VFIO reads this dword to
know if there are extended capabilities.
However it is not always possible to access the extended space so
generic PCI code in pci_cfg_space_size_ext() checks
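In plain terms the probe amounts to something like this sketch (simplified; the real logic in VFIO and pci_cfg_space_size_ext() also has to cope with platforms where the access itself may fail):

#include <linux/pci.h>

static bool dev_has_ext_caps(struct pci_dev *pdev)
{
	u32 dw;

	if (pci_read_config_dword(pdev, 0x100, &dw))
		return false;	/* config access failed */
	/* All zeros at 100h (or all ones on a bad read) means no extended caps */
	return dw != 0 && dw != 0xffffffff;
}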
IBM POWER8 NVLink systems come with Tesla K40-ish GPUs, each of which
also has a couple of high-speed links (NVLink). The interface to the links
is exposed as an emulated PCI bridge which is included into the same
IOMMU group as the corresponding GPU.
In the kernel, NPUs get a separate PHB of the PNV_P
This exports the debugging helper pe_level_printk() and the corresponding
macros so they can be used in npu-dma.c.
Signed-off-by: Alexey Kardashevskiy
---
arch/powerpc/platforms/powernv/pci-ioda.c | 9 +
arch/powerpc/platforms/powernv/pci.h | 9 +
2 files changed, 10 insertions(+)
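For reference, the exported bits look roughly like this (sketch based on the description; the real prototypes live in pci.h):

void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
		     const char *fmt, ...);

#define pe_err(pe, fmt, ...)	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
#define pe_warn(pe, fmt, ...)	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
#define pe_info(pe, fmt, ...)	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)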
NPU devices are emulated in firmware and mainly used for NPU NVLink
training; there is one NPU device per hardware link. Their DMA/TCE setup
must match the GPU which is connected via PCIe and NVLink so any changes
to the DMA/TCE setup on the GPU PCIe device need to be propagated to
the NVLink device as
The pnv_ioda_pe struct keeps an array of peers. At the moment it is only
used to link GPU and NPU for 2 purposes:
1. Accessing the NPU quickly when configuring DMA for the GPU - this was
addressed in the previous patch by removing that use, as DMA setup is not
something the kernel does constantly.
2. Invalida
IBM POWER8 NVLink systems contain the usual Tesla K40-ish GPUs but also
contain a couple of very fast links between the GPU and the CPU. These links
are exposed to userspace by the OPAL firmware as bridges.
In order to make these links work when GPU is passed to the guest,
these bridges need to be passed
The upcoming NVLink passthrough support will require NPU code to cope
with two DMA windows.
This adds a pnv_npu_set_window() helper which programs the 32-bit window
into the hardware. This also adds multilevel TCE support.
This adds a pnv_npu_unset_window() helper which removes the DMA window
from the h
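A sketch of what pnv_npu_set_window() might boil down to, assuming the OPAL call opal_pci_map_pe_dma_window() is used to program the window (argument choices below, such as reusing the PE number as the window id, are assumptions for illustration only):

static long pnv_npu_set_window(struct pnv_ioda_pe *npe, struct iommu_table *tbl)
{
	struct pnv_phb *phb = npe->phb;
	/* With multilevel TCE tables only the top level is handed to firmware */
	const unsigned long size = tbl->it_indirect_levels ?
			tbl->it_level_size : tbl->it_size;
	int64_t rc;

	rc = opal_pci_map_pe_dma_window(phb->opal_id,
			npe->pe_number,
			npe->pe_number,			/* window id (assumption) */
			tbl->it_indirect_levels + 1,	/* number of TCE levels */
			__pa(tbl->it_base),		/* top-level table address */
			size << 3,			/* table size in bytes */
			IOMMU_PAGE_SIZE(tbl));		/* page size from the table */
	return rc ? -EIO : 0;
}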
This uses the page size from iommu_table instead of hard-coded 4K.
This should cause no change in behavior.
While we are here, move bits around to prepare for further rework
which will define and use iommu_table_group_ops.
Signed-off-by: Alexey Kardashevskiy
Reviewed-by: David Gibson
Reviewed-b
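The kind of change this implies, illustratively (not the actual hunk):

/* Before: TCE index converted to a DMA address assuming 4K IOMMU pages */
start = index << 12;

/* After: honour the page size recorded in the iommu_table */
start = index << tbl->it_page_shift;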
The patch
ASoC: fsl_ssi: add CCSR_SSI_SOR to volatile register list
has been applied to the asoc tree at
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git
All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) a
Hi Andy,
On 04/28/2016 02:53 PM, Andy Lutomirski wrote:
> On 04/28/2016 08:18 AM, Christopher Covington wrote:
>> Please take a look at the following prototype of sharing the PowerPC
>> VDSO unmap and remap code with other architectures. I've only hooked
>> up arm64 to begin with. If folks think t
From: "Aneesh Kumar K.V"
The driver was requesting a write-through mapping. But with those
flags we will end up with an SAO mapping, because we now have memory
coherence always enabled, i.e. the existing mapping will end up with a
WIMG value of 0b1110, which is Strong Access Order.
Update this to
Testing done by Paul Mackerras has shown that with a modern compiler
there is no negative effect on code generation from enabling
STRICT_MM_TYPECHECKS.
So remove the option, and always use the strict type definitions.
Acked-by: Paul Mackerras
Signed-off-by: Michael Ellerman
---
arch/powerpc/Kc
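For anyone unfamiliar with the option, the "strict" definitions follow the usual pattern of wrapping the raw value in a single-member struct so that a pte can no longer be silently mixed up with a plain unsigned long (sketch of the pattern, not the exact powerpc definitions):

/* Strict typing: pte_t is a distinct type, catching accidental integer mixups */
typedef struct { unsigned long pte; } pte_t;
#define pte_val(x)	((x).pte)
#define __pte(x)	((pte_t) { (x) })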
From: "Aneesh Kumar K.V"
pmd_hugepage_update() is inside #ifdef CONFIG_TRANSPARENT_HUGEPAGE. THP
can only be enabled if PPC_BOOK3S_64=y && PPC_64K_PAGES=y, aka. hash64.
On hash64 we always define PTE_ATOMIC_UPDATES to 1, meaning the #ifdef
in pmd_hugepage_update() is unnecessary, so drop it.
Th
We have five locations in 64-bit hash MMU code that do a cmpxchg() of a
PTE. Currently doing it inline is OK, but in a future patch we will be
converting the PTEs to __be64 in some configs. In that case we will need
casts at every cmpxchg() site in order to keep sparse happy.
So move the logic into a
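A sketch of the sort of helper this points at (the name and exact types are assumptions): once the casts live in one place, switching the stored PTE to __be64 only touches this helper rather than every call site.

static inline bool pte_cmpxchg(pte_t *ptep, pte_t old, pte_t new)
{
	unsigned long prev;

	/* All the casting between pte_t and the cmpxchg word lives here */
	prev = cmpxchg((unsigned long *)ptep,
		       (unsigned long)pte_val(old),
		       (unsigned long)pte_val(new));
	return prev == (unsigned long)pte_val(old);
}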
From: "Aneesh Kumar K.V"
Traditionally Power server machines have used the Hashed Page Table MMU
mode. In this mode Linux manages its own tree of nested page tables,
aka. "the Linux page tables", which are not used by the hardware
directly, and software loads translations into the hash page table
We can avoid doing endian conversions by using pte_raw() in pxx_same().
The swap of the constant (_PAGE_HPTEFLAGS) should be done at compile
time by the compiler.
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/hash-64k.h | 2 +-
arch/powerpc/include/asm/book3s/64/hash.h
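Concretely, the pattern is roughly as follows (a sketch of what the description implies, assuming pte_raw() returns the stored big-endian word):

static inline int pte_same(pte_t pte_a, pte_t pte_b)
{
	/* Compare raw (BE) words; only the constant mask gets byte-swapped,
	 * and cpu_to_be64(_PAGE_HPTEFLAGS) folds away at compile time. */
	return ((pte_raw(pte_a) ^ pte_raw(pte_b)) &
		~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0;
}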
From: "Aneesh Kumar K.V"
This splits the _PAGE_RW bit into _PAGE_READ and _PAGE_WRITE. It also
removes the dependency on _PAGE_USER for implying read only. A few things
to note here: read is implied by write and execute permission. Hence we
should always find _PAGE_READ set on hash p
From: "Aneesh Kumar K.V"
Subpage protection used to depend on the _PAGE_USER bit to implement no
access mode. This patch switches that to use _PAGE_RWX. We clear Read,
Write and Execute access from the pte instead of clearing _PAGE_USER
now. This was done so that we can switch to _PAGE_PRIVILEGED
In a subsequent patch we want to add a second definition of pte_user().
Before we do that, make the signature clear, ie. it takes a pte_t and
returns bool.
We move it up inside the existing #ifndef __ASSEMBLY__ block, but
otherwise it's a straight conversion.
Convert the call in settlbcam(), whic
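After the change the helper would look something like this (sketch; the body is the obvious one implied by the description, not copied from the patch):

static inline bool pte_user(pte_t pte)
{
	return !!(pte_val(pte) & _PAGE_USER);
}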
From: "Aneesh Kumar K.V"
We have a common declaration in pte-common.h. Add a book3s-specific one
and switch to pte_user() in callchain.c. In a subsequent patch we will
switch _PAGE_USER to _PAGE_PRIVILEGED in the book3s version only.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerma
From: "Aneesh Kumar K.V"
PTE_RPN_SHIFT is actually page size dependent. Even though PowerISA 3.0
expects only the lower 12 bits to be zero, we will always find the pages
to be PAGE_SHIFT aligned. In case of hash config, this also allows us to
use the additional 3 bits to track pte specific inform
From: "Aneesh Kumar K.V"
PS3 used a PPP bit hack to implement a read-only mapping in the
kernel area. Since we are bolting the ioremap area, it used the pte
flags _PAGE_PRESENT | _PAGE_USER to get a PPP value of 0x3, thereby
resulting in a read-only mapping. This means the area can be accesse
From: "Aneesh Kumar K.V"
_PAGE_PRIVILEGED means the page can be accessed only by the kernel. This
is done to keep pte bits similar to PowerISA 3.0 Radix PTE format. User
pages are now marked by clearing _PAGE_PRIVILEGED bit.
Previously we allowed the kernel to have a privileged page in the lower
From: "Aneesh Kumar K.V"
The radix variant is going to require a flush_pmd_tlb_range(). With
flush_pmd_tlb_range() added, pmdp_clear_flush_young() is the same as the
generic version. So drop the powerpc specific variant.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch
From: "Aneesh Kumar K.V"
The radix variant is going to require a flush_tlb_range(). With
flush_tlb_range() added, ptep_clear_flush_young() is the same as the
generic version. So drop the powerpc specific variant.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc
From: "Aneesh Kumar K.V"
Start moving code that is generic between radix and hash from the
book3s64 hash-specific header to the book3s64-specific headers.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/32/mmu-hash.h | 6 +--
arch/powerpc/include/
From: "Aneesh Kumar K.V"
Add structs and #defines related to the radix MMU partition table
format. We also add a ppc_md callback for updating a partition table
entry.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/mmu.h | 31
From: "Aneesh Kumar K.V"
This patch reduces the number of #ifdefs in C code and will also help in
adding radix changes later. Only code movement in this patch.
Signed-off-by: Aneesh Kumar K.V
[mpe: Propagate copyrights and update GPL text]
Signed-off-by: Michael Ellerman
---
arch/powerpc/mm/M
From: "Aneesh Kumar K.V"
This helps to make following hash only pte bits easier.
We have kept _PAGE_CHG_MASK, _HPAGE_CHG_MASK and _PAGE_PROT_BITS as they
are in this patch even though they use hash-specific bits. Using them
as-is for radix should be OK, because with radix we expect those bit
posit
From: "Aneesh Kumar K.V"
I am splitting this out as a separate patch to get better review. If it
is OK we should merge this with the previous patch.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/hash.h | 6 ++
1 file changed, 2 insertions(+), 4 d
From: "Aneesh Kumar K.V"
Now that we have moved book3s hash64 Linux pte bits to match Power ISA
3.0 radix pte bit positions, we move the matching pte bits to a common
header.
Only code movement in this patch. No functionality change.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Eller
From: "Aneesh Kumar K.V"
PowerISA 3.0 introduces two pte bits with the below meaning for radix:
00 -> Normal Memory
01 -> Strong Access Order (SAO)
10 -> Non idempotent I/O (Cache inhibited and guarded)
11 -> Tolerant I/O (Cache inhibited)
We drop the existing WIMG bits in the Linux page
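As Linux pte flags that two-bit field could be spelled roughly like this (bit placements below are illustrative assumptions, not taken from the patch):

#define _PAGE_SAO		0x00010	/* 01: Strong Access Order */
#define _PAGE_NON_IDEMPOTENT	0x00020	/* 10: non-idempotent I/O (CI + guarded) */
#define _PAGE_TOLERANT		0x00030	/* 11: tolerant I/O (cache inhibited) */
/* 00 (neither bit set): normal memory */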
From: "Aneesh Kumar K.V"
PowerISA 3.0 adds a partition table indexed by LPID. The partition table
allows us to specify the MMU model that will be used for guest and host
translation.
This patch adds support with SLB based hash model (UPRT = 0). What is
required with this model is to support the new ha
From: "Aneesh Kumar K.V"
Use a helper instead of open coding with constants. A later patch will
drop the WIMG bits and use PowerISA 3.0 defines.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/kernel/btext.c | 2 +-
arch/powerpc/kernel/isa-bridge.c | 4 ++
From: "Aneesh Kumar K.V"
These pte functions will remain the same between radix and hash. Move
them to pgtable.h.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/hash.h| 205 --
arch/powerpc/include/asm/book3s/
From: "Aneesh Kumar K.V"
Radix and hash MMU models support different page table sizes. Make
the #defines variables so that existing code can work with variable
sizes.
Slice related code is only used by hash, so use hash constants there. We
will replicate some of the boundary conditions with res
From: "Aneesh Kumar K.V"
Now that the page table size is a variable, we can move these to
generic pgtable.h.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/hash.h| 16
arch/powerpc/include/asm/book3s/64/pgtable.h | 1
From: "Aneesh Kumar K.V"
Only code movement. No functionality change.
Signed-off-by: Aneesh Kumar K.V
Acked-by: Balbir Singh
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 142 +--
1 file changed, 71 insertions(+), 71 deletions(-)
From: "Aneesh Kumar K.V"
This adds Power ISA 3.0 specific pte defines. We share most of the
details with hash Linux page table format. This patch indicates only
things where we differ.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/pgtab
From: "Aneesh Kumar K.V"
In this patch we add the radix Kconfig and conditional check.
radix_enabled() is written to always return 0 here. Once we have all
needed radix changes added, we will update this to an mmu_feature check.
We need to add this early so that we can get it all built in the ea
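The interim stub is then trivial (sketch): with it always false, radix-only code compiles but is discarded by the optimiser until the real mmu_feature check replaces it.

/* Interim: radix is never enabled until the feature check is wired up */
static inline bool radix_enabled(void)
{
	return false;
}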
From: "Aneesh Kumar K.V"
For those pte accessors that operate on a different set of pte bits
between hash and radix, we add a generic variant that conditionally
dispatches to the hash Linux or radix variant.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3
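The generic variants follow one simple shape; an illustrative example (the specific accessor shown is arbitrary here):

static inline pte_t pte_mkwrite(pte_t pte)
{
	if (radix_enabled())
		return radix__pte_mkwrite(pte);
	return hash__pte_mkwrite(pte);
}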
From: "Aneesh Kumar K.V"
Here we create pgtable-64/4k.h and move pmd accessors that are common
between hash and radix there. We can't do much sharing with 4K Linux
page size because 4K Linux page size with hash config doesn't support
THP. So for now it is empty. In later patches we will add funct
From: "Aneesh Kumar K.V"
This only does 64K Linux page support for now. With the 64K hash Linux
config, THP needs to be differentiated from a hugetlb huge page, because
with THP we need to track hash pte slot information for each subpage.
This is not needed with hugetlb hugepage, because we don't d
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/mmu.h | 20
arch/powerpc/include/asm/mmu.h | 14 +++---
arch/powerpc/mm/hash_utils_64.c | 6 +++---
3 files changed,
From: "Aneesh Kumar K.V"
This adds routines for early setup for radix. We use device tree
property "ibm,processor-radix-AP-encodings" to find supported page
sizes. If we don't find the above we consider 64K and 4K as supported
page sizes.
We map vmemmap using a 2M page size if we can. The linear
From: "Aneesh Kumar K.V"
For hash we create vmemmap mapping using bolted hash page table entries.
For radix we fill the radix page table. The next patch will add the
radix details for creating vmemmap mappings.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/i
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 12
arch/powerpc/include/asm/book3s/64/radix.h | 6 ++
arch/powerpc/mm/pgtable-radix.c | 20
3 files
From: "Aneesh Kumar K.V"
How we switch MMU context differs between hash and radix. For hash we
need to switch the SLB details and for radix we need to switch the PID
SPR.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/mmu_context.h | 25 ++
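Stripped of barriers and bookkeeping, the switch looks roughly like this sketch (SPRN_PID and switch_slb() are the existing primitives; the surrounding details are omitted):

static inline void switch_mmu_context(struct mm_struct *prev,
				      struct mm_struct *next,
				      struct task_struct *tsk)
{
	if (radix_enabled()) {
		/* Radix: the hardware walks next's page tables for this PID */
		mtspr(SPRN_PID, next->context.id);
		return;
	}
	/* Hash: load the SLB entries for the new context */
	switch_slb(tsk, next);
}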
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/mmu_context.h | 4
arch/powerpc/mm/mmu_context_hash64.c | 43 +++---
2 files changed, 39 insertions(+), 8 deletions(-)
diff --git a/arch/po
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/tlbflush-hash.h | 28 ++-
arch/powerpc/include/asm/book3s/64/tlbflush.h | 56 ++
arch/powerpc/include/asm/tlbflush.h|
From: "Aneesh Kumar K.V"
This file now contains both hash and radix specific code. Rename it to
indicate this better.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/mm/Makefile | 7 +++
arch/powerpc/mm/{mmu_context_
From: "Aneesh Kumar K.V"
We are going to add asm changes in the follow-up patches. Add the
feature bit now so that it can all be built.
mpe: Note that if CONFIG_PPC_RADIX_MMU=n we define MMU_FTR_RADIX to
zero. This has the effect of turning all the radix_enabled() checks into
if (0), which t
From: "Aneesh Kumar K.V"
We also use MMU_FTR_RADIX to branch out of code paths specific to
hash.
No functionality change.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/kernel/entry_64.S | 7 +--
arch/powerpc/kernel/exceptions-64s.S | 28
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/platforms/pseries/lpar.c| 14 +++---
arch/powerpc/platforms/pseries/lparcfg.c | 3 ++-
2 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/platforms/p
From: "Aneesh Kumar K.V"
Radix doesn't need slice support. Catch incorrect usage of slice code
when radix is enabled.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/mm/slice.c | 16
1 file changed, 16 insertions(+)
diff --git a/arch/powerpc/
From: "Aneesh Kumar K.V"
On return from RTAS we access the paca variables with 64-bit mode
disabled. This requires us to keep the paca within the 32-bit address range.
Fix this by setting ppc64_rma_size to first_memblock_size/1G range.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/
From: "Aneesh Kumar K.V"
The core kernel doesn't track the page size of the VA range that we are
invalidating. Hence we end up flushing the TLB for the entire mm here.
Later patches will improve this.
We also don't flush the page walk cache separately; instead we use RIC=2
when flushing the TLB, because we do a MMU
From: "Aneesh Kumar K.V"
Hash needs special get_unmapped_area() handling because of limitations
around base page size, so we have to set HAVE_ARCH_UNMAPPED_AREA.
With radix we don't have such restrictions, so we could use the generic
code. But because we've set HAVE_ARCH_UNMAPPED_AREA (for hash)
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/mm/hash_utils_64.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index e6b53cde676e..faaa
From: "Aneesh Kumar K.V"
This patch starts to make a book3s variant of the pgalloc headers. We have
multiple book3s-specific changes, such as:
* 4 level page table
* store physical address in higher level table
* use pte_t * for pgtable_t
Having a book3s64 specific variant helps to keep code si
From: "Aneesh Kumar K.V"
Simplify the code by dropping 4-level page table #ifdef. We are always
4-level now.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/pgalloc.h | 57 +---
1 file changed, 18 insertions(+), 39
From: "Aneesh Kumar K.V"
Only code cleanup. No functionality change.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/pgalloc.h | 12 ++--
arch/powerpc/include/asm/nohash/64/pgalloc.h | 12 ++--
arch/powerpc/mm/pgtable_64.c
From: "Aneesh Kumar K.V"
This patch switches 4K Linux page size config to use pte_t * type
instead of struct page * for pgtable_t. This simplifies the code a lot
and helps in consolidating both 64K and 4K page allocator routines. The
changes should not have any impact, because we already store ph
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/pgalloc.h | 34
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ++--
arch/powerpc/mm/hash_utils_64.c | 7 ++
arc
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Acked-by: Balbir Singh
Signed-off-by: Michael Ellerman
---
arch/powerpc/mm/pgtable.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index db277b6d8e8b..88a307504b5a 100644
--
From: "Aneesh Kumar K.V"
This moves the nohash variant of pgalloc headers to nohash/ directory
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/32/pgalloc.h | 6 +++---
arch/powerpc/include/asm/book3s/64/pgalloc.h | 17 ++
From: "Aneesh Kumar K.V"
This reverts the pgalloc-related changes with respect to implementing a
4-level page table for the 64K Linux page size and storing physical
addresses in higher-level page tables, since they are only applicable to
the book3s64 variant
and we now have a separate copy for book3s64. This helps to keep
From: "Aneesh Kumar K.V"
The vmalloc range differs between hash and radix config. Hence make
VMALLOC_START and related constants a variable which will be runtime
initialized depending on whether hash or radix mode is active.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
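One way to express that (a sketch with assumed variable names): the constants become macros over variables that early MMU setup fills in for whichever of hash or radix is active.

/* Filled in during early MMU init, differently for hash and radix */
extern unsigned long __vmalloc_start;
extern unsigned long __vmalloc_end;

#define VMALLOC_START	__vmalloc_start
#define VMALLOC_END	__vmalloc_end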
From: "Aneesh Kumar K.V"
With radix there is no MMU cache. Hence we don't need to do anything in
update_mmu_cache().
Signed-off-by: Aneesh Kumar K.V
Acked-by: Balbir Singh
Signed-off-by: Michael Ellerman
---
arch/powerpc/mm/mem.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/pow
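The change is essentially an early return (sketch of the guard; the existing hash preload path is untouched below it):

void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
		      pte_t *ptep)
{
	if (radix_enabled())
		return;		/* no hash page table to preload on radix */

	/* ... existing hash preload logic follows ... */
}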
From: "Aneesh Kumar K.V"
In this patch we make the number of pte fragments per level 4 page table
page a variable. Radix level 4 table size is 256 bytes and hence we can
have 256 fragments per level 4 page. We don't update the fragment count
in this patch. We need to do performance measurements t
From: "Aneesh Kumar K.V"
Radix doesn't use the slice framework to find the page size. Hence use
vma to find the page size.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/mm/hugetlbpage.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --gi
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/hugetlb-radix.h | 14
arch/powerpc/include/asm/hugetlb.h | 14
arch/powerpc/mm/Makefile | 1 +
arch/powerpc/mm/hu
From: "Aneesh Kumar K.V"
With 4K page size radix config our level 1 page table size is 64K and it
should be naturally aligned.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/kernel/head_64.S | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
From: "Aneesh Kumar K.V"
We have hugepages at the PMD level with the 4K radix config. Hence we don't
need to use the hugepd format with radix.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/hash-4k.h| 22 +---
arch/powerpc/include/asm
From: "Aneesh Kumar K.V"
Only code movement in this patch. No functionality change.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 24 +-
arch/powerpc/mm/pgtable-hash64.c | 358 ++
arch/po
From: "Aneesh Kumar K.V"
The deposited pgtable_t is a pte fragment, hence we cannot use page->lru
for linking them together. We use the first two 64-bit words of the pte
fragment as a list_head to link all deposited fragments together. On
withdraw we properly zero them out.
Signed-off-by: Aneesh Kumar
Hi Linus,
Please pull a few more powerpc fixes for 4.6:
The following changes since commit 4705e02498d6d5a7ab98dfee9595cd5e91db2017:
powerpc: Update TM user feature bits in scan_features() (2016-04-18 20:10:45
+1000)
are available in the git repository at:
git://git.kernel.org/pub/scm/lin
From: "Aneesh Kumar K.V"
This adds THP support for 4K Linux page size config with radix. We still
don't do THP with 4K Linux page size and hash page table. Hash page
table needs a 16MB hugepage and we can't do THP with a 16MB hugepage and
4K Linux page size.
We add missing functions to 4K hash con
From: "Aneesh Kumar K.V"
We use the existing "ibm,pa-features" device-tree property to enable
Radix MMU mode. This means we default to hash mode unless firmware tells
us it's OK to start using Radix mode.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/kernel/
On Fri, Apr 29, 2016 at 09:45:58PM +0800, Minfei Huang wrote:
> On 04/28/16 at 03:44P, Josh Poimboeuf wrote:
> > In preparation for being able to determine whether a given stack trace
> > is reliable, allow the stacktrace_ops functions to propagate errors to
> > dump_trace().
>
> Hi, Josh.
>
> Ha
From: "Aneesh Kumar K.V"
Add #defines for Power ISA 3.0 software defined bits.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/
From: "Aneesh Kumar K.V"
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/book3s/64/hash-64k.h| 23 +++-
arch/powerpc/include/asm/book3s/64/pgtable-64k.h | 42 +--
arch/powerpc/include/asm/book3s/64/pgtable.h | 83 +++---
arch/
On Fri, 29 Apr 2016 18:55:14 +1000
Alexey Kardashevskiy wrote:
> PCI-Express spec says that reading 4 bytes at offset 100h should return
> zero if there is no extended capability so VFIO reads this dword to
> know if there are extended capabilities.
>
> However it is not always possible to acces
On Fri, 29 Apr 2016 18:55:15 +1000
Alexey Kardashevskiy wrote:
> We are going to have multiple different types of PHB on the same system
> with POWER8 + NVLink and PHBs will have different IOMMU ops. However
> we only really care about one callback - create_table - so we can
> relax the compatibi
On Thu, Apr 28, 2016 at 1:44 PM, Josh Poimboeuf wrote:
> A preempted function might not have had a chance to save the frame
> pointer to the stack yet, which can result in its caller getting skipped
> on a stack trace.
>
> Add a flag to indicate when the task has been preempted so that stack
> dum
On Thu, Apr 28, 2016 at 1:44 PM, Josh Poimboeuf wrote:
> Add the TIF_PATCH_PENDING thread flag to enable the new livepatch
> per-task consistency model for x86_64. The bit getting set indicates
> the thread has a pending patch which needs to be applied when the thread
> exits the kernel.
>
> The
On Thu, Apr 28, 2016 at 1:44 PM, Josh Poimboeuf wrote:
> Thanks to all the recent x86 entry code refactoring, most tasks' kernel
> stacks start at the same offset right above their saved pt_regs,
> regardless of which syscall was used to enter the kernel. That creates
> a nice convention which ma
On Fri, Apr 29, 2016 at 11:06:53AM -0700, Andy Lutomirski wrote:
> On Thu, Apr 28, 2016 at 1:44 PM, Josh Poimboeuf wrote:
> > A preempted function might not have had a chance to save the frame
> > pointer to the stack yet, which can result in its caller getting skipped
> > on a stack trace.
> >
>
On Fri, Apr 29, 2016 at 11:08:04AM -0700, Andy Lutomirski wrote:
> On Thu, Apr 28, 2016 at 1:44 PM, Josh Poimboeuf wrote:
> > Add the TIF_PATCH_PENDING thread flag to enable the new livepatch
> > per-task consistency model for x86_64. The bit getting set indicates
> > the thread has a pending pat
On Fri, Apr 29, 2016 at 1:11 PM, Josh Poimboeuf wrote:
> On Fri, Apr 29, 2016 at 11:06:53AM -0700, Andy Lutomirski wrote:
>> On Thu, Apr 28, 2016 at 1:44 PM, Josh Poimboeuf wrote:
>> > A preempted function might not have had a chance to save the frame
>> > pointer to the stack yet, which can resu
On Fri, Apr 29, 2016 at 01:19:23PM -0700, Andy Lutomirski wrote:
> On Fri, Apr 29, 2016 at 1:11 PM, Josh Poimboeuf wrote:
> > On Fri, Apr 29, 2016 at 11:06:53AM -0700, Andy Lutomirski wrote:
> >> On Thu, Apr 28, 2016 at 1:44 PM, Josh Poimboeuf
> >> wrote:
> >> > A preempted function might not ha
On Fri, Apr 29, 2016 at 02:46:10PM -0400, Brian Gerst wrote:
> On Thu, Apr 28, 2016 at 4:44 PM, Josh Poimboeuf wrote:
> > Thanks to all the recent x86 entry code refactoring, most tasks' kernel
> > stacks start at the same offset right above their saved pt_regs,
> > regardless of which syscall was
On Fri, Apr 29, 2016 at 1:27 PM, Josh Poimboeuf wrote:
> On Fri, Apr 29, 2016 at 01:19:23PM -0700, Andy Lutomirski wrote:
>> On Fri, Apr 29, 2016 at 1:11 PM, Josh Poimboeuf wrote:
>> > On Fri, Apr 29, 2016 at 11:06:53AM -0700, Andy Lutomirski wrote:
>> >> On Thu, Apr 28, 2016 at 1:44 PM, Josh Poi
dma_pool_zalloc() combines dma_pool_alloc() and a memset() to 0. The semantic
patch that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
//
@@
expression d,e;
statement S;
@@
d =
-dma_pool_alloc
+dma_pool_zalloc
(...);
if (!d) S
-
dma_pool_zalloc() combines dma_pool_alloc() and a memset() to 0. The semantic
patch that makes this transformation is as follows: (http://coccinelle.lip6.fr/)
//
@@
type T;
T *d;
expression e;
statement S;
@@
d =
-dma_pool_alloc
+dma_pool_zalloc
(...);
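In C terms the transformation these semantic patches describe is simply the following (illustrative; 'desc', 'pool' and the cleared size are placeholders):

/* Before: allocate, then clear by hand */
desc = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
if (!desc)
	return -ENOMEM;
memset(desc, 0, sizeof(*desc));

/* After: one call allocates and zeroes */
desc = dma_pool_zalloc(pool, GFP_KERNEL, &dma_handle);
if (!desc)
	return -ENOMEM;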
> -Original Message-
> From: Julia Lawall [mailto:julia.law...@lip6.fr]
> Sent: Friday, April 29, 2016 3:09 PM
> To: Li Yang
> Cc: kernel-janit...@vger.kernel.org; Zhang Wei ; Vinod
> Koul ; Dan Williams ;
> linuxppc-dev@lists.ozlabs.org; dmaeng...@vger.kernel.org; linux-
> ker...@vger.k