Hi,
Any updates on this?
On Saturday 20 February 2016 10:32 AM, Anju T wrote:
This short patch series adds the ability to sample the interrupted
machine state for each hardware sample.
To test this patchset, for example:
$ perf record -I? # list supported registers
output:
available registers:
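A quick usage sketch (hedged: register names must come from the -I? list above; r1/r2 below are just examples):
$ perf record -I ls          # sample all supported registers while running ls
$ perf record -Ir1,r2 ls     # sample only r1 and r2
$ perf report -D             # the raw dump includes the sampled register values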
On 03/07/2016 05:25 PM, David Gibson wrote:
On Mon, Mar 07, 2016 at 02:41:14PM +1100, Alexey Kardashevskiy wrote:
The existing in-kernel TCE table for emulated devices contains
guest physical addresses which are accessed by emulated devices.
Since we need to keep this information for VFIO device
For a partition running on PHYP, there can be an adjunct partition
which shares the virtual address range with the operating system.
Virtual address ranges which can be used by the adjunct partition
are communicated via the virtual device node of the device tree with
a property known as "ibm,reserved-vir
On Fri, 2016-03-04 at 09:56 +0100, Jiri Kosina wrote:
> On Fri, 4 Mar 2016, Michael Ellerman wrote:
>
> > Obviously it depends heavily on the content of my series, which will go into
> > powerpc#next, so it would make sense if this went there too.
> >
> > I don't see any changes in linux-next for l
This splits the _PAGE_RW bit into _PAGE_READ and _PAGE_WRITE. It also removes
the dependency on _PAGE_USER for implying read only. A few things to note
here: we have read implied with write and execute permission.
Hence we should always find _PAGE_READ set on a hash pte fault.
We still can't switch PR
_PAGE_PRIVILEGED means the page can be accessed only by the kernel. This is done
to keep pte bits similar to the PowerISA 3.0 radix PTE format. User
pages are now marked by clearing the _PAGE_PRIVILEGED bit.
Previously we allowed the kernel to have a privileged page
in the lower address range (USER_REGION). With t
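As a rough illustration of the new convention (the bit values below are made up for illustration, not the actual powerpc definitions):

#define _PAGE_READ        0x00004    /* illustrative value only */
#define _PAGE_WRITE       0x00002    /* illustrative value only */
#define _PAGE_RW          (_PAGE_READ | _PAGE_WRITE)
#define _PAGE_PRIVILEGED  0x00008    /* access restricted to the kernel */

static inline bool pte_user(pte_t pte)
{
	/* user pages are marked by clearing _PAGE_PRIVILEGED */
	return !(pte_val(pte) & _PAGE_PRIVILEGED);
}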
We had them as page size dependent. Use PAGE_SIZE instead. While there,
remove them and define RPN_MASK better.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 4
arch/powerpc/include/asm/book3s/64/hash-64k.h | 11 +--
arch/powerpc/include/asm/boo
PS3 used a PPP bit hack to implement a read-only mapping in the
kernel area. Since we are bolt mapping the ioremap area, it used
the pte flags _PAGE_PRESENT | _PAGE_USER to get a PPP value of 0x3,
thereby resulting in a read-only mapping. This means the area
can be accessed by user space, but ker
The radix variant is going to require a flush_pmd_tlb_range. With
flush_pmd_tlb_range added, pmdp_clear_flush_young is the same as the generic
version, so drop the powerpc-specific variant.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 3 ---
arch/powerpc/mm/pgta
The radix variant is going to require a flush_tlb_range. With
flush_tlb_range added, ptep_clear_flush_young is the same as the generic
version, so drop the powerpc-specific variant.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash.h | 23 +++
1 file chan
We want to use mmu_context_t for both radix and hash. Move it to mmu.h.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/32/mmu-hash.h | 6 +--
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 61 ++-
arch/powerpc/include/asm/book3s/64/mmu.h | 72 +++
We also add a machdep callback for updating the partition table entry.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu.h | 31 +--
arch/powerpc/include/asm/machdep.h | 1 +
arch/powerpc/include/asm/reg.h | 1 +
3 files changed,
PowerISA 3.0 introduces three pte bits with the meanings below:
000 -> Normal memory
001 -> Strong Access Order
010 -> Non-idempotent I/O (also cache inhibited and guarded)
100 -> Tolerant I/O (cache inhibited)
We drop the existing WIMG bits in the Linux page table in favour of the above
constants. We loos
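For illustration, the encodings in the table above could be captured as constants like these (values follow the table; the actual bit placement inside the PTE is omitted):

#define _PAGE_NORMAL          0x0   /* 000: normal memory */
#define _PAGE_SAO             0x1   /* 001: strong access order */
#define _PAGE_NON_IDEMPOTENT  0x2   /* 010: non-idempotent I/O, cache inhibited and guarded */
#define _PAGE_TOLERANT        0x4   /* 100: tolerant I/O, cache inhibited */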
Subpage protection used to depend on the _PAGE_USER bit to implement no
access mode. This patch switches that to use _PAGE_RWX. We now clear read
and write access from the pte instead of clearing _PAGE_USER. This was
done to enable us to switch to _PAGE_PRIVILEGED. subpage_protection()
returns the pte bits th
hugepd_free() used __get_cpu_var() once. Nothing ensured that the code
accessing the variable did not migrate from one CPU to another and soon
this was noticed by Tiejun Chen in 94b09d755462 ("powerpc/hugetlb:
Replace __get_cpu_var with get_cpu_var"). So we had it fixed.
Christoph Lameter was doin
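For reference, the fixed pattern looks roughly like this (a sketch; the per-CPU variable is only declared here for illustration):

static DEFINE_PER_CPU(struct hugepd_freelist *, hugepd_freelist);

static void hugepd_free_sketch(void)
{
	struct hugepd_freelist **batchp;

	batchp = &get_cpu_var(hugepd_freelist);	/* preempt_disable() under the hood */
	/* ... safe to work on the per-CPU data, we cannot migrate to another CPU here ... */
	put_cpu_var(hugepd_freelist);		/* preempt_enable() */
}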
This enables us to share the same page table code for
both radix and hash. Radix uses a hardware-defined big-endian
page table.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash.h | 16 +++--
arch/powerpc/include/asm/kvm_book3s_64.h| 13 ++--
arch/powerpc/include/
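A sketch of what a big-endian Linux page table means in practice: the stored PTE words are __be64 and every raw access goes through a byte-swapping accessor (names here are illustrative):

static inline unsigned long pte_raw_get(const __be64 *ptep)
{
	return be64_to_cpu(READ_ONCE(*ptep));
}

static inline void pte_raw_set(__be64 *ptep, unsigned long val)
{
	WRITE_ONCE(*ptep, cpu_to_be64(val));
}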
We have a common declaration in pte-common.h. Add a book3s-specific one
and switch to pte_user(). In a later patch we will be switching
_PAGE_USER to _PAGE_PRIVILEGED.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 5 +
arch/powerpc/perf/callchain.c
Use a helper instead of open-coding with constants. A later patch will
drop the WIMG bits and use the PowerISA 3.0 defines.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kernel/btext.c | 2 +-
arch/powerpc/kernel/isa-bridge.c | 4 ++--
arch/powerpc/kernel/pci_64.c | 2 +-
arch/powerpc/mm/pgtab
This adds support for P9 hash with UPRT=0, i.e. we don't have
segment table support yet.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 13 +++--
arch/powerpc/mm/hash_native_64.c | 11 ++-
arch/powerpc/mm/hash_utils_64.c | 42
Sebastian Andrzej Siewior writes:
> [ text/plain ]
> hugepd_free() used __get_cpu_var() once. Nothing ensured that the code
> accessing the variable did not migrate from one CPU to another and soon
> this was noticed by Tiejun Chen in 94b09d755462 ("powerpc/hugetlb:
> Replace __get_cpu_var with g
On Mon, 2016-03-07 at 02:35 +, Qiang Zhao wrote:
> On Tue, Mar 05, 2016 at 12:26PM, Rob Herring wrote:
> > -Original Message-
> > From: Rob Herring [mailto:r...@kernel.org]
> > Sent: Saturday, March 05, 2016 12:26 PM
> > To: Qiang Zhao
> > Cc: o...@buserror.net; Yang-Leo Li ; Xiaobo Xi
Hi Aneesh,
[auto build test ERROR on powerpc/next]
[also build test ERROR on next-20160307]
[cannot apply to v4.5-rc7]
[if your patch is applied to the wrong git tree, please drop us a note to help
improving the system]
url:
https://github.com/0day-ci/linux/commits/Aneesh-Kumar-K-V/powerpc
csum_partial is often called for small fixed length packets
for which it is suboptimal to use the generic csum_partial()
function.
For instance, in my configuration, I got:
* One place calling it with constant len 4
* Seven places calling it with constant len 8
* Three places calling it with const
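The shape of the optimisation is roughly the following (a sketch of the idea, not the submitted code; it assumes a 32-bit-aligned buffer and uses csum_add()/csum_partial() from the generic checksum headers):

static __always_inline __wsum csum_partial_small(const void *buff, int len,
						 __wsum sum)
{
	const __wsum *p = buff;

	/* let the compiler resolve small constant lengths at build time */
	if (__builtin_constant_p(len) && len == 4)
		return csum_add(sum, p[0]);
	if (__builtin_constant_p(len) && len == 8)
		return csum_add(csum_add(sum, p[0]), p[1]);
	return csum_partial(buff, len, sum);	/* generic fallback */
}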
> On Mar 4, 2016, at 3:55 PM, Uma Krishnan wrote:
>
> From: "Manoj N. Kumar"
>
> The calls to pci_request_regions(), pci_resource_start(),
> pci_set_dma_mask(), pci_set_master() and pci_save_state() are all
> unnecessary for the IBM CXL flash adapter since data buffers
> are not required to be
> On Mar 4, 2016, at 3:55 PM, Uma Krishnan wrote:
>
> When operating in the PowerVM environment, the cxlflash module can
> receive an error from the hypervisor indicating that there are
> existing mappings in the page table for the process MMIO space.
>
> This issue exists because term_afu() cur
> On Mar 4, 2016, at 3:55 PM, Uma Krishnan wrote:
>
> In order to support cxlflash in the PowerVM environment, underlying
> hypervisor APIs have imposed a kernel API ordering change.
>
> For the superpipe access to LUN, user applications need a context.
> The cxlflash module creates this context
> On Mar 4, 2016, at 3:55 PM, Uma Krishnan wrote:
>
> From: "Manoj N. Kumar"
>
> With the current value of cmd_per_lun at 16, the throughput
> over a single adapter is limited to around 150kIOPS.
>
> Increase the value of cmd_per_lun to 256 to improve
> throughput. With this change a single ad
> On Mar 4, 2016, at 3:55 PM, Uma Krishnan wrote:
>
> From: "Manoj N. Kumar"
>
> When switching to the internal LUN defined on the
> IBM CXL flash adapter, there is an unnecessary
> scan occurring on the second port. This scan leads
> to the following extra lines in the log:
>
> Dec 17 10:09:0
From: Ian Munsie
This adds an afu_driver_ops structure with event_pending and
deliver_event callbacks. An AFU driver such as cxlflash can fill these
out and associate it with a context to enable passing custom AFU
specific events to userspace.
The cxl driver will call event_pending() during poll
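Roughly, the proposed structure would look like this (the exact signatures are my guess for illustration, not the final cxl API):

struct cxl_afu_driver_ops {
	/* called from the cxl poll/read paths: does the AFU driver have a
	 * private event queued for this context? */
	bool (*event_pending)(struct cxl_context *ctx);
	/* translate/copy that event into the buffer handed to userspace */
	void (*deliver_event)(struct cxl_context *ctx,
			      struct cxl_event *event, size_t len);
};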
From: Michael Neuling
This provides AFU drivers a means to associate private data with a cxl
context. This is particularly intended to make the new callbacks for
driver-specific events easier for AFU drivers to use, as they can easily
get back to any private data structures they may use.
Signed
From: Varun Sethi
PAMU driver suspend and resume support.
Signed-off-by: Varun Sethi
Signed-off-by: Codrin Ciubotariu
---
drivers/iommu/fsl_pamu.c | 155 +--
1 file changed, 123 insertions(+), 32 deletions(-)
diff --git a/drivers/iommu/fsl_pamu.c b
Erratum A-007907 can cause a core hang under certain circumstances.
Part of the workaround involves not stashing to L1 Cache. On affected
chips, stash to L2 when L1 is requested.
Signed-off-by: Scott Wood
Signed-off-by: Varun Sethi
Signed-off-by: Shengzhou Liu
Signed-off-by: Codrin Ciubotariu
Signed-off-by: Codrin Ciubotariu
---
drivers/iommu/fsl_pamu.c| 92 +
drivers/iommu/fsl_pamu.h| 29 +++--
drivers/iommu/fsl_pamu_domain.c | 41 +++---
drivers/iommu/fsl_pamu_domain.h | 2 +-
4 files changed, 109 insertion
From: Varun Sethi
Enable the OMT cache before invalidating the PAACT and SPAACT caches. This
is a workaround for a PAMU hardware erratum.
Signed-off-by: Varun Sethi
Signed-off-by: Codrin Ciubotariu
---
drivers/iommu/fsl_pamu.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/drivers/iommu/fs
From: Varun Sethi
Added cache controller compatible strings for T2080, B4420, T1040
and T1024. PAMU driver searches for a matching string while setting
up L3 cache stashing.
Signed-off-by: Varun Sethi
Signed-off-by: Codrin Ciubotariu
---
drivers/iommu/fsl_pamu.c | 10 ++
1 file change
This patchset addresses a few issues found in the PAMU IOMMU
and adds small changes to enable power management and to support the
L3 cache controller on some newer boards.
The series starts with a clean-up patch, followed by two
errata fixes: A-007907 and A-005982. It continues with
two fixes for PCIe supp
From: Varun Sethi
Once the PCIe device assigned to a guest VM (via VFIO) gets detached
from the iommu domain (when the guest terminates), its PAMU table entry
is disabled. So, this would prevent the device from being used once
it's assigned back to the host.
This patch allows for creation of a defau
From: Varun Sethi
Factor out PCI specific code in the PAMU driver.
Signed-off-by: Varun Sethi
Signed-off-by: Codrin Ciubotariu
---
drivers/iommu/fsl_pamu_domain.c | 77 ++---
1 file changed, 41 insertions(+), 36 deletions(-)
diff --git a/drivers/iommu/fsl_
save_sprs() in process.c contains the following test:
if (cpu_has_feature(cpu_has_feature(CPU_FTR_ALTIVEC)))
t->vrsave = mfspr(SPRN_VRSAVE);
The CPU feature with the mask 0x1 is CPU_FTR_COHERENT_ICACHE, so the test
is equivalent to:
if (cpu_has_feature(CPU_FTR_ALTIV
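Presumably the intended test is simply:

if (cpu_has_feature(CPU_FTR_ALTIVEC))
	t->vrsave = mfspr(SPRN_VRSAVE);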
On Mon, 7 Mar 2016, Michael Ellerman wrote:
> > This aligns with my usual workflow, so that'd be my preferred way of doing
> > things; i.e. you put all the ftrace changes into a separate topic branch,
> > and then
> >
> > - you pull that branch into powerpc#next
> > - I pull that branch into livep
On Mon, 2016-03-07 at 23:52 +0100, Jiri Kosina wrote:
> On Mon, 7 Mar 2016, Michael Ellerman wrote:
>
> > > This aligns with my usual workflow, so that'd be my preferred way of doing
> > > things; i.e. you put all the ftrace changes into a separate topic branch,
> > > and then
> > >
> > > - you pul
On Mon, Mar 07, 2016 at 11:52:31PM +0100, Jiri Kosina wrote:
> On Mon, 7 Mar 2016, Michael Ellerman wrote:
>
> > > This aligns with my usual workflow, so that'd be my preferred way of doing
> > > things; i.e. you put all the ftrace changes into a separate topic branch,
> > > and then
> > >
> > > -
On Mon, 2016-03-07 at 21:04 +0530, Aneesh Kumar K.V wrote:
> Sebastian Andrzej Siewior writes:
>
> While you are there, can you also fix the wrong indentation on line 423?
.../...
Also, this looks like stable material, no?
Cheers,
Ben.
The change looks to be correct, but we need better formatting and a description.
Make the title something like:
dmaengine: fsldma: fix memory leak
On Thu, Dec 24, 2015 at 1:26 AM, Xuelin Shi wrote:
> From: Xuelin Shi
>
> missing unmap sources and destinations while doing dequeue.
How can this
On Thu, Dec 24, 2015 at 1:26 AM, Xuelin Shi wrote:
> From: Xuelin Shi
And please cc dmaengine maintainers and its mailing list when you send
next version.
>
> missing unmap sources and destinations while doing dequeue.
>
> Signed-off-by: Xuelin Shi
> ---
> drivers/dma/fsldma.c | 2 ++
> 1 fil
A couple of minor nits below...
> On Mar 7, 2016, at 12:59 PM, Ian Munsie wrote:
>
> @@ -346,7 +350,7 @@ ssize_t afu_read(struct file *file, char __user *buf,
> size_t count,
>
> for (;;) {
> prepare_to_wait(&ctx->wq, &wait, TASK_INTERRUPTIBLE);
> - if (ctx_even
> On Mar 7, 2016, at 12:59 PM, Ian Munsie wrote:
>
> From: Michael Neuling
>
> This provides AFU drivers a means to associate private data with a cxl
> context. This is particularly intended for make the new callbacks for
> driver specific events easier for AFU drivers to use, as they can easil
On Tue, Mar 01, 2016 at 08:40:12PM -0600, Rob Herring wrote:
>On Thu, Feb 18, 2016 at 9:16 PM, Gavin Shan wrote:
>> On Wed, Feb 17, 2016 at 08:59:53AM -0600, Rob Herring wrote:
>>>On Tue, Feb 16, 2016 at 9:44 PM, Gavin Shan
>>>wrote:
This renames unflatten_dt_node() to unflatten_dt_nodes()
On Tue, Mar 08, 2016 at 1:28AM, Scott Wood wrote:
> -Original Message-
> From: Scott Wood [mailto:o...@buserror.net]
> Sent: Tuesday, March 08, 2016 1:28 AM
> To: Qiang Zhao ; Rob Herring
> Cc: Yang-Leo Li ; Xiaobo Xie ;
> linux-ker...@vger.kernel.org; devicet...@vger.kernel.org; linuxppc-
Excerpts from Matt Ochs's message of 2016-03-08 11:26:55 +1100:
> Any reason for adding these extra lines as part of this commit?
mpe asked for some newlines here in the v1 submission, and it only
really made sense to do so if all the related sections had consistent
whitespace as well.
> > +/*
>
From: Ian Munsie
This adds an afu_driver_ops structure with event_pending and
deliver_event callbacks. An AFU driver such as cxlflash can fill these
out and associate it with a context to enable passing custom AFU
specific events to userspace.
The cxl driver will call event_pending() during poll
From: Michael Neuling
This provides AFU drivers a means to associate private data with a cxl
context. This is particularly intended to make the new callbacks for
driver-specific events easier for AFU drivers to use, as they can easily
get back to any private data structures they may use.
Signed
On Tue, 08 Mar 2016 10:20:22 +1100
Michael Ellerman wrote:
> >
> > There is one remaining issue which I think would be really nice to
> > have(TM), and that's Steven's Ack for the whole thing :)
>
> Yeah. He's been on CC the whole time, but he's probably getting a bit sick of
> it all, as we'r
Currently, the only error that htab_remove_mapping() can report is -EINVAL,
if removal of bolted HPTEs isn't implemented for this platform. We make
a few cleanups to the handling of this:
* EINVAL isn't really the right code - there's nothing wrong with the
function's arguments - use ENODEV i
At the moment the hpte_removebolted callback in ppc_md returns void and
will BUG_ON() if the hpte it's asked to remove doesn't exist in the first
place. This is awkward for the case of cleaning up a mapping which was
partially made before failing.
So, we add a return value to hpte_removebolted, a
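In diff form the interface change would look roughly like this (a sketch; the exact error convention is whatever the series settles on, e.g. -ENOENT for a missing HPTE):

-	void (*hpte_removebolted)(unsigned long ea, int psize, int ssize);
+	int (*hpte_removebolted)(unsigned long ea, int psize, int ssize);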
This makes a number of cleanups to handling of mapping failures during
memory hotplug on Power:
For errors creating the linear mapping for the hot-added region:
* This is now reported with EFAULT which is more appropriate than the
previous EINVAL (the failure is unlikely to be related to the
This is an unfinished implementation of the kernel parts of the PAPR
hashed page table (HPT) resizing extension.
It contains a complete guest-side implementation - or as complete as
it can be until we have a final PAPR change.
It also contains a host side implementation for KVM HV (the KVM PR and
htab_get_table_size() either retrieves the size of the hash page table (HPT)
from the device tree - if the HPT size is determined by firmware - or
uses a heuristic to determine a good size based on RAM size if the kernel
is responsible for allocating the HPT.
To support a PAPR extension allowing re
We've now implemented code in the pseries platform to use the new PAPR
interface to allow resizing the hash page table (HPT) at runtime.
This patch uses that interface to automatically attempt to resize the HPT
when memory is hot added or removed. This tries to always keep the HPT at
a reasonable
The hypervisor needs to know a guest is capable of using the HPT resizing
PAPR extension in order to take full advantage of it for memory hotplug.
If the hypervisor knows the guest is HPT resize aware, it can size the
initial HPT based on the initial guest RAM size, relying on the guest to
resize
This adds the hypercall numbers and wrapper functions for the hash page
table resizing hypercalls.
These are experimental "platform specific" values for now, until we have a
formal PAPR update.
It also adds a new firmware feature flag to track the presence of the
HPT resizing calls.
Signed-off-b
At present KVM on powerpc always reports KVM_CAP_PPC_ALLOC_HTAB as enabled.
However, the ioctl() it advertises (KVM_PPC_ALLOCATE_HTAB) only actually
works on KVM HV. On KVM PR it will fail with ENOTTY.
qemu already has a workaround for this, so it's not breaking things in
practice, but it would b
This adds support for using experimental hypercalls to change the size
of the main hash page table while running as a PAPR guest. For now these
hypercalls are only in experimental qemu versions.
The interface is two part: first H_RESIZE_HPT_PREPARE is used to allocate
and prepare the new hash tab
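As a sketch, the guest-side flow would be something like the following (the hypercall wrapper names are placeholders for whatever this patch introduces; H_IS_LONG_BUSY() and get_longbusy_msecs() are the usual pseries helpers):

rc = plpar_resize_hpt_prepare(0, new_shift);
while (H_IS_LONG_BUSY(rc)) {
	msleep(get_longbusy_msecs(rc));		/* allocation can take a while */
	rc = plpar_resize_hpt_prepare(0, new_shift);
}
if (rc == H_SUCCESS)
	rc = plpar_resize_hpt_commit(0, new_shift);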
This adds a new powerpc-specific KVM_CAP_SPAPR_RESIZE_HPT capability to
advertise whether KVM is capable of handling the PAPR extensions for
resizing the hashed page table during guest runtime.
At present, HPT resizing is possible with KVM PR without kernel
modification, since the HPT is managed w
Currently the kvm_hpt_info structure stores the hashed page table's order,
and also the number of HPTEs it contains and a mask for its size. The
last two can be easily derived from the order, so remove them and just
calculate them as necessary with a couple of helper inlines.
Signed-off-by: David
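The derivation is straightforward if each HPTE is 16 bytes and HPTEs sit in 8-entry groups; a sketch of the helper inlines under those assumptions (not necessarily the patch's exact form):

static inline unsigned long kvmppc_hpt_npte(struct kvm_hpt_info *hpt)
{
	return 1UL << (hpt->order - 4);		/* 2^order bytes / 16 bytes per HPTE */
}

static inline unsigned long kvmppc_hpt_mask(struct kvm_hpt_info *hpt)
{
	return (1UL << (hpt->order - 7)) - 1;	/* hash mask in 8-HPTE groups */
}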
Currently, the powerpc kvm_arch structure contains a number of variables
tracking the state of the guest's hashed page table (HPT) in KVM HV. This
patch gathers them all together into a single kvm_hpt_info substructure.
This makes life more convenient for the upcoming HPT resizing
implementation.
Currently, kvmppc_alloc_hpt() both allocates a new hashed page table (HPT)
and sets it up as the active page table for a VM. For the upcoming HPT
resize implementation we're going to want to allocate HPTs separately from
activating them.
So, split the allocation itself out into kvmppc_allocate_hp
The difference between kvm_alloc_hpt() and kvmppc_alloc_hpt() is not at
all obvious from the name. In practice kvmppc_alloc_hpt() allocates an HPT
by whatever means, and calls kvm_alloc_hpt(), which will attempt to allocate
it with CMA only.
To make this less confusing, rename kvm_alloc_hpt() to k
This updates the KVM_CAP_SPAPR_RESIZE_HPT capability to advertise the
presence of in-kernel HPT resizing on KVM HV. In fact the HPT resizing
isn't fully implemented, but this allows us to experiment with what's
there.
Signed-off-by: David Gibson
---
arch/powerpc/kvm/powerpc.c | 5 -
1 file
This patch adds a stub (always failing) implementation of the hypercalls
for the HPT resizing PAPR extension.
For now we include a hack which makes it safe for qemu to call ENABLE_HCALL
on these hypercalls, although it will have no effect. That should go away
once the PAPR change is formalized an
While there is an active HPT resize in progress, working out which guest
pages are dirty is rather more complicated, because depending on the exact
phase of the resize the information could be in the current,
tentative or previous HPT or reverse map of the guest.
To avoid this problem, fo
This implements the code for HPT resizing to actually pivot from the
currently active HPT to the new HPT, which has previously been populated
by rehashing entries from the old HPT.
This only occurs while the guest is executing the H_RESIZE_HPT_COMMIT
hypercall, handling synchronization with the gu
While an HPT resize operation is in progress, specifically when a tentative
HPT has been allocated and we are possibly in the middle of populating it
various host side MM events need to be reflected in the tentative resized
HPT as well as the currently active one.
This extends the powerpc KVM MMU
KVM on powerpc uses several MMU notifiers to update guest page tables and
reverse mappings based on host MM events. At present these always act on the
guest's main active hash table and reverse mappings.
However, for HPT resizing we're going to need these to sometimes operate
on a tentative hash table or
This adds code to initialize an HPT resize operation, including allocating
a tentative new HPT and reverse maps. It also includes corresponding code
to free things afterwards.
Signed-off-by: David Gibson
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 42 +
1 file
This adds code for the "guts" of an HPT resize operation: rehashing HPTEs
from the current HPT into the new resized HPT.
This is performed by the HPT resize work thread, but is gated to occur only
while the guest is executing the H_RESIZE_HPT_COMMIT hypercall. The guest
is expected not to modify
This adds an outline (not yet working) of an implementation for the HPT
resizing PAPR extension. Specifically it adds the work function which will
see through the resizing workflow, and adds in the synchronization between
this and the HPT resizing hypercalls.
Signed-off-by: David Gibson
---
arc
The KVM_PPC_ALLOCATE_HTAB ioctl() is used to set the size of hashed page
table (HPT) that userspace expects a guest VM to have, and is also used to
clear that HPT when necessary (e.g. guest reboot).
At present, once the ioctl() is called for the first time, the HPT size can
never be changed therea
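For reference, a minimal userspace sketch of the ioctl() (needs <linux/kvm.h> and <sys/ioctl.h>; error handling trimmed):

uint32_t shift = 24;	/* request a 2^24 byte (16 MiB) HPT */

if (ioctl(vm_fd, KVM_PPC_ALLOCATE_HTAB, &shift) < 0)
	err(1, "KVM_PPC_ALLOCATE_HTAB");
/* depending on kernel version, 'shift' may be updated to the order
 * actually allocated */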
During an HPT resize operation we have two HPTs and sets of reverse maps
for the guest: the active one, and the tentative resized one. This means
that information about a host page's referenced / dirty state as affected
by the guest could end up in either HPT depending on exactly what moment
it ha
From: Li Zhang
Upstream has supported parallel page initialisation for X86 and the
boot time is improved greatly. Some tests have been done for Power.
Here are the results I obtained with different memory sizes.
* 4GB memory:
boot time is as follows:
with patch vs without patch: 10
From: Li Zhang
This patch is based on Mel Gorman's old patch on the mailing list,
https://lkml.org/lkml/2015/5/5/280, which was discussed but then
fixed with a completion to wait for all memory to be initialised in
page_alloc_init_late(). It fixes the OOM problem on X86
with 24TB memory which alloca
From: Li Zhang
Parallel initialisation has been enabled for X86 and boot time is
improved greatly. On Power8, it is improved greatly for small
memory. Here are the results from my test on a Power8 platform:
For 4GB memory: boot time is improved by 57%, as follows:
with patch: 10s, without patch: 24.
> On Mar 7, 2016, at 7:48 PM, Ian Munsie wrote:
>
> From: Ian Munsie
>
> This adds an afu_driver_ops structure with event_pending and
> deliver_event callbacks. An AFU driver such as cxlflash can fill these
> out and associate it with a context to enable passing custom AFU
> specific events to
On Mon, Mar 07, 2016 at 08:38:13PM +1100, Alexey Kardashevskiy wrote:
> On 03/07/2016 05:25 PM, David Gibson wrote:
> >On Mon, Mar 07, 2016 at 02:41:14PM +1100, Alexey Kardashevskiy wrote:
> >>The existing in-kernel TCE table for emulated devices contains
> >>guest physical addresses which are acce
On Mon, Mar 07, 2016 at 06:32:23PM +1100, Alexey Kardashevskiy wrote:
> On 03/07/2016 05:05 PM, David Gibson wrote:
> >On Mon, Mar 07, 2016 at 02:41:12PM +1100, Alexey Kardashevskiy wrote:
> >>In real mode, TCE tables are invalidated using different
> >>cache-inhibited store instructions which is d
On 03/07/2016 05:00 PM, David Gibson wrote:
On Mon, Mar 07, 2016 at 02:41:11PM +1100, Alexey Kardashevskiy wrote:
VFIO on sPAPR already implements guest memory pre-registration
when the entire guest RAM gets pinned. This can be used to translate
the physical address of a guest page containing th
On Mon, Mar 07, 2016 at 02:41:15PM +1100, Alexey Kardashevskiy wrote:
> In-kernel VFIO acceleration needs different handling in real and virtual
> modes which makes it hard to support both modes in the same handler.
>
> This creates a copy of kvmppc_rm_h_stuff_tce and kvmppc_rm_h_put_tce
> in addi
On Tue, Mar 08, 2016 at 04:47:20PM +1100, Alexey Kardashevskiy wrote:
> On 03/07/2016 05:00 PM, David Gibson wrote:
> >On Mon, Mar 07, 2016 at 02:41:11PM +1100, Alexey Kardashevskiy wrote:
> >>VFIO on sPAPR already implements guest memory pre-registration
> >>when the entire guest RAM gets pinned.
Changelog v5:
1. Removed the mini-stack frame created for klp_return_helper.
As a result of the mini-stack frame, functions with > 8
arguments could not be patched
2. Removed camel casing in the comments
Changelog v4:
1. Renamed klp_matchaddr() to klp_ge
On 09/02/16 10:57, Andrew Donnellan wrote:
It is a fix - I'm a bit hazy on the details now but IIRC, Daniel Axtens
and I encountered this when doing some cxl debugging, though I think we
decided not to tag this for stable since it was a secondary issue to the
primary bug we were looking for. It p
The mtmsr() function hangs during restart. Make reboot work on
MVME5100 by removing that function call.
---
arch/powerpc/platforms/embedded6xx/mvme5100.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/powerpc/platforms/embedded6xx/mvme5100.c
b/arch/powerpc/platforms/embedded6xx/mvme5100.