Christophe Leroy writes:
> The number of high slices a process might use now depends on its
> address space size, and what allocation address it has requested.
>
> This patch uses that limit throughout call chains where possible,
> rather than use the fixed SLICE_NUM_HIGH for bitmap operations.
>
On Tue, 27 Feb 2018 12:50:08 +0530
"Aneesh Kumar K.V" wrote:
> Christophe Leroy writes:
> > +if ((start + len) > SLICE_LOW_TOP) {
> > + unsigned long start_index = GET_HIGH_SLICE_INDEX(start);
> > + unsigned long align_end = ALIGN(end, (1UL << SLICE_HIGH_SHIFT));
> > +
On Tue, 27 Feb 2018 12:59:53 +0530
"Aneesh Kumar K.V" wrote:
> Christophe Leroy writes:
>
> > The slice_mask cache was a basic conversion which copied the slice
> > mask into caller's structures, because that's how the original code
> > worked. In most cases the pointer can be used directly ins
On Tue, 27 Feb 2018 14:31:07 +0530
"Aneesh Kumar K.V" wrote:
> Christophe Leroy writes:
>
> > The number of high slices a process might use now depends on its
> > address space size, and what allocation address it has requested.
> >
> > This patch uses that limit throughout call chains where po
On Monday, February 26, 2018 11:25:09 AM CET Thorsten Leemhuis wrote:
> On 26.02.2018 04:05, Linus Torvalds wrote:
> > We're on the normal schedule for 4.16 and everything still looks very
> > regular.
>
> Hi! Find below my second regression report for Linux 4.16. It lists 8
> regressions I'm cur
cpm_cascade() doesn't have to call eoi() as it is already called
by handle_fasteoi_irq().
And cpm_get_irq() will always return an unsigned int, so the test
is useless.
Signed-off-by: Christophe Leroy
---
arch/powerpc/platforms/8xx/m8xx_setup.c | 8 +---
1 file changed, 1 insertion(+), 7 deleti
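For illustration, with the redundant eoi() call and the useless sign test
dropped, the cascade handler might collapse to something like this (a
sketch under those assumptions, not the actual diff):

static void cpm_cascade(struct irq_desc *desc)
{
	/* cpm_get_irq() returns an unsigned int, so there is nothing to
	 * test for here, and handle_fasteoi_irq() already issues the EOI
	 * for the cascade interrupt on our behalf. */
	generic_handle_irq(cpm_get_irq());
}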
"Naveen N. Rao" wrote:
> I'm wondering if we can instead encode the bpf prog id in
> imm32. That way, we should be able to indicate the BPF
> function being called into. Daniel, is that something we
> can consider?
Since each subprog does not get a separate id, we cannot fetch
the fd and therefor
Nicholas Piggin writes:
> On Tue, 27 Feb 2018 14:31:07 +0530
> "Aneesh Kumar K.V" wrote:
>
>> Christophe Leroy writes:
>>
>> > The number of high slices a process might use now depends on its
>> > address space size, and what allocation address it has requested.
>> >
>> > This patch uses that
On 02/27/2018 01:13 PM, Sandipan Das wrote:
> "Naveen N. Rao" wrote:
>> I'm wondering if we can instead encode the bpf prog id in
>> imm32. That way, we should be able to indicate the BPF
>> function being called into. Daniel, is that something we
>> can consider?
>
> Since each subprog does not
On Tue, Feb 27, 2018 at 9:44 AM, Mathieu Malaterre wrote:
> On Tue, Feb 27, 2018 at 8:33 AM, Christophe LEROY
> wrote:
> Much simpler is just add
>
> if (ARRAY_SIZE() == 0)
> return;
>> Or add in front:
>> if (!ARRAY_SIZE(feature_properties))
>> return;
>
> (not tested
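A minimal sketch of the guard being discussed, assuming a
feature_properties[] table like the one in arch/powerpc/kernel/prom.c
(illustrative and self-contained, not the actual kernel code):

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

struct feature_property {
	const char *name;
	unsigned long min_value;
};

/* May legitimately be empty depending on configuration. */
static const struct feature_property feature_properties[] = {
};

static void check_cpu_feature_properties(void)
{
	unsigned long i;

	/* Proposed guard: bail out before iterating when the table is empty. */
	if (!ARRAY_SIZE(feature_properties))
		return;

	for (i = 0; i < ARRAY_SIZE(feature_properties); i++) {
		/* ... look up feature_properties[i].name in the device tree ... */
	}
}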
On Tue, Dec 12, 2017 at 01:12:37PM +0100, Miroslav Benes wrote:
>
> I think that this is not enough. You need to also implement
> save_stack_trace_tsk_reliable() for powerpc defined as __weak in
> kernel/stacktrace.c.
So here is my initial proposal. I'd really like to get the successful
exit st
On Tue, Feb 27, 2018 at 4:52 PM, Andy Shevchenko
wrote:
> On Tue, Feb 27, 2018 at 9:44 AM, Mathieu Malaterre wrote:
>> On Tue, Feb 27, 2018 at 8:33 AM, Christophe LEROY
>> wrote:
>
>> Much simpler is just add
>>
>> if (ARRAY_SIZE() == 0)
>> return;
>
>>> Or add in front:
>>> if
From: Simon Guo
These days, many OS distributions make use of transactional
memory functionality. On PowerPC, HV KVM supports TM, but PR KVM
does not.
The motivation for transactional memory support in PR KVM is the
OpenStack Continuous Integration testing - they run an HV (hypervisor)
KVM (as l
From: Simon Guo
This patch adds some macros for CR0/TEXASR bits so that the PR KVM TM
logic (tbegin./treclaim./tabort.) can make use of them later.
Signed-off-by: Simon Guo
Reviewed-by: Paul Mackerras
---
arch/powerpc/include/asm/reg.h | 25 -
arch/powerpc/pla
From: Simon Guo
PR KVM will need to reuse msr_check_and_set().
This patch exports this API for reuse.
Signed-off-by: Simon Guo
Reviewed-by: Paul Mackerras
---
arch/powerpc/kernel/process.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/
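For reference, the export described amounts to a one-line addition along
these lines (a sketch; the exact export macro used by the patch is an
assumption):

unsigned long msr_check_and_set(unsigned long bits)
{
	/* ... existing body in arch/powerpc/kernel/process.c ... */
}
/* New: make the existing helper visible to (modular) KVM code. */
EXPORT_SYMBOL_GPL(msr_check_and_set);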
From: Simon Guo
This patch exports the tm_enable()/tm_disable()/tm_abort() APIs, which
will be used by the PR KVM transactional memory logic.
Signed-off-by: Simon Guo
Reviewed-by: Paul Mackerras
---
arch/powerpc/include/asm/asm-prototypes.h | 3 +++
arch/powerpc/include/asm/tm.h | 2 --
ar
From: Simon Guo
This is a simple patch that just moves the kvmppc_save_tm()/kvmppc_restore_tm()
functionality to tm.S. There is no logic change. The restructuring of
those APIs will be done in later patches to improve readability.
This is in preparation for reusing those APIs in both HV and PR PPC KVM.
Signe
From: Simon Guo
HV KVM and PR KVM need different MSR sources to indicate whether
treclaim. or trecheckpoint. is necessary.
This patch adds a new parameter (guest MSR) to the kvmppc_save_tm()/
kvmppc_restore_tm() APIs:
- For HV KVM, it is VCPU_MSR
- For PR KVM, it is current host MSR or VCPU_SHADOW_
From: Simon Guo
kvmppc_save_tm() invokes store_fp_state()/store_vr_state(). So it is
mandatory to turn on the FP/VSX/VMX MSR bits for its execution, just
like kvmppc_restore_tm() does.
Previously, HV KVM turned the bits on outside of kvmppc_save_tm().
Now we include this bit change i
From: Simon Guo
Currently the _kvmppc_save_tm()/_kvmppc_restore_tm() APIs can only be
invoked from assembly code. This patch adds C function wrappers for them
so that they can be safely called from C code.
Signed-off-by: Simon Guo
---
arch/powerpc/include/asm/asm-prototypes.h | 6 ++
arch/powerpc/kvm/
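A hedged sketch of what such a C wrapper could look like; the function and
field names here are assumptions, the point is only that the assembly
helper needs the math facilities enabled and preemption disabled around
the call:

void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu)
{
	preempt_disable();

	/* Enable the facilities the assembly helper touches. */
	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);

	/* Assumed name for the assembly routine; the guest MSR tells it
	 * whether a treclaim. is actually needed. */
	_kvmppc_save_tm(vcpu, kvmppc_get_msr(vcpu));

	preempt_enable();
}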
From: Simon Guo
This patch simulates interrupt behavior per the Power ISA while injecting
an interrupt in PR KVM:
- When an interrupt happens, the transactional state should be suspended.
kvmppc_mmu_book3s_64_reset_msr() will be invoked when injecting an
interrupt. This patch performs this ISA logic in
kvmppc
From: Simon Guo
PowerPC TM functionality needs MSR TM/TS bit support at the hardware level.
Guest TM functionality cannot be emulated with "fake" MSR (the msr in the
magic page) TS bits.
This patch syncs the TM/TS bits in shadow_msr with the MSR value in the
magic page, so that the MSR TS value which the guest sees i
From: Simon Guo
MSR TS bits can be modified with non-privileged instructions like
tbegin./tend. That means the guest can change the MSR value "silently"
without notifying the host.
It is necessary to sync the TM bits to the host so that the host can
calculate the shadow MSR correctly.
Note: a privileged guest will always
From: Simon Guo
According to the ISA specification for RFID, in the MSR TM disabled and TS
suspended state (S0), if the target MSR is TM disabled and the TS state is
inactive (N0), rfid should suppress this update.
This patch makes the RFID emulation of PR KVM consistent with this.
Signed-off-by: Simon Gu
From: Simon Guo
A PR KVM host is usually equipped with TM enabled in its host MSR value,
and with a non-transactional TS value.
When a guest with TM active traps into the PR KVM host, the rfid at the
tail of kvmppc_interrupt_pr() will try to switch the TS bits from
S0 (Suspended & TM disabled) to N1 (Non-trans
From: Simon Guo
This patch adds 2 new APIs: kvmppc_copyto_vcpu_tm() and
kvmppc_copyfrom_vcpu_tm(). These 2 APIs will be used to copy TM data
between the VCPU_TM and VCPU areas.
PR KVM will use these APIs for treclaim. or trechkpt. emulation.
Signed-off-by: Simon Guo
---
arch/powerpc/kvm/book3s
From: Simon Guo
This patch adds 2 new APIs, kvmppc_save_tm_sprs()/kvmppc_restore_tm_sprs(),
for the purpose of TEXASR/TFIAR/TFHAR save/restore.
Signed-off-by: Simon Guo
Reviewed-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_pr.c | 22 ++
1 file changed, 22 insertions(+)
di
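A minimal sketch of what these helpers might look like, assuming the usual
tm_enable()/tm_disable() and mfspr()/mtspr() interfaces; the vcpu field
names are assumptions, not the patch's actual layout:

static inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu)
{
	tm_enable();				/* MSR[TM] must be set to touch TM SPRs */
	vcpu->arch.texasr = mfspr(SPRN_TEXASR);
	vcpu->arch.tfhar  = mfspr(SPRN_TFHAR);
	vcpu->arch.tfiar  = mfspr(SPRN_TFIAR);
	tm_disable();
}

static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
{
	tm_enable();
	mtspr(SPRN_TEXASR, vcpu->arch.texasr);
	mtspr(SPRN_TFHAR, vcpu->arch.tfhar);
	mtspr(SPRN_TFIAR, vcpu->arch.tfiar);
	tm_disable();
}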
From: Simon Guo
The transactional memory checkpoint area save/restore behavior is
triggered when the VCPU qemu process is switched out of/onto the CPU, i.e.
at kvmppc_core_vcpu_put_pr() and kvmppc_core_vcpu_load_pr().
The MSR TM active state is determined by the TS bits:
active: 10 (transactional) or 01 (suspende
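Concretely, a sketch of the switch points described; only the
MSR_TM_ACTIVE() test is the essential part, everything else is elided and
the helper names are assumptions:

static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
{
	if (MSR_TM_ACTIVE(kvmppc_get_msr(vcpu)))
		kvmppc_save_tm_pr(vcpu);	/* hardware checkpoint -> vcpu struct */
	/* ... existing put path ... */
}

static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, int cpu)
{
	/* ... existing load path ... */
	if (MSR_TM_ACTIVE(kvmppc_get_msr(vcpu)))
		kvmppc_restore_tm_pr(vcpu);	/* vcpu struct -> hardware checkpoint */
}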
From: Simon Guo
The math registers will be saved into vcpu->arch.fp/vr and the corresponding
vcpu->arch.fp_tm/vr_tm areas.
We flush or give up the math regs into vcpu->arch.fp/vr before saving the
transaction. After the transaction is restored, the math regs will be loaded
back into the regs.
If there is a FP/VE
From: Simon Guo
mfspr/mtspr on the TM SPRs (TEXASR/TFIAR/TFHAR) are non-privileged
instructions and can be executed by the PR KVM guest in problem state
without trapping into the host. We only emulate mtspr/mfspr of
texasr/tfiar/tfhar in the guest PR=0 state.
When we are emulating mtspr of TM SPRs in the guest PR=0 st
From: Simon Guo
Currently kvmppc_handle_fac() will not update NV GPRs and thus it can
return with GUEST_RESUME.
However, a PR KVM guest always disables the MSR_TM bit in privileged state.
If a PR privileged guest tries to read the TM SPRs, it will trigger a TM
facility unavailable exception and fall into kv
From: Simon Guo
Currently the kernel doesn't use transactional memory.
And there is an issue for the privileged guest:
the tbegin/tsuspend/tresume/tabort TM instructions can affect MSR TM bits
without trapping into the PR host. So the following code will lead to a
false mfmsr result:
tbegin <- MSR bits update
From: Simon Guo
This patch adds support for "treclaim." emulation when PR KVM guest
executes treclaim. and traps to host.
We will firstly doing treclaim. and save TM checkpoint. Then it is
necessary to update vcpu current reg content with checkpointed vals.
When rfid into guest again, those vcpu
From: Simon Guo
This patch adds host emulation for when the PR KVM guest executes
"trechkpt.", which is a privileged instruction and will trap into the host.
We first copy the vcpu's ongoing content into the vcpu TM checkpoint
content, then perform kvmppc_restore_tm_pr() to do trechkpt.
with the updated vcpu TM checkpo
From: Simon Guo
Currently PR KVM doesn't support transactional memory in the guest
privileged state.
This patch adds a check when setting the guest MSR, so that we can never
return to the guest with PR=0 and TS=0b10. A tabort will be emulated to
indicate this and fail the transaction immediately.
Signed-off-by: Sim
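A sketch of the shape of that check; the helper name and the failure cause
below are placeholders, not the patch's actual code:

static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
{
	/* Never enter the guest with PR=0 and TS=0b10 (transactional):
	 * pretend the transaction aborted instead. */
	if (!(msr & MSR_PR) && MSR_TM_TRANSACTIONAL(msr)) {
		kvmppc_emulate_tabort(vcpu, 0 /* placeholder failure cause */);
		msr &= ~MSR_TS_MASK;	/* assumed: drop the transactional state */
	}
	/* ... existing MSR update path ... */
}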
From: Simon Guo
Currently the privileged guest runs with TM disabled.
Although the privileged guest cannot initiate a new transaction,
it can use tabort to terminate its problem state's transaction.
So it is still necessary to emulate tabort. for the privileged guest.
This patch adds emulation for
From: Simon Guo
Currently the guest kernel doesn't handle TAR facility unavailable and
always runs with the TAR bit on. PR KVM will lazily enable TAR. TAR is not
a frequently used register and it is not included in the SVCPU struct.
Due to the above, the checkpointed TAR value might be a bogus TAR value.
To solve this is
From: Simon Guo
With the current patch set, PR KVM now supports HTM, so this patch turns
it on for PR KVM.
Tested with:
https://github.com/justdoitqd/publicFiles/blob/master/test_kvm_htm_cap.c
Signed-off-by: Simon Guo
---
arch/powerpc/kvm/powerpc.c | 3 +--
1 file changed, 1 insertion(+), 2 delet
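The change itself is tiny; a sketch of the capability check it implies in
kvm_vm_ioctl_check_extension() (the exact condition used by the patch is
an assumption):

	/* in kvm_vm_ioctl_check_extension(): */
	case KVM_CAP_PPC_HTM:
		/* PR KVM can now emulate HTM, so report the capability whenever
		 * the host exposes HTM to userspace, not only when hv_enabled. */
		r = !!(cur_cpu_spec->cpu_user_features2 & PPC_FEATURE2_HTM);
		break;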
From: Simon Guo
Although we already have kvm_arch_vcpu_async_ioctl(), which doesn't require
the ioctl to load the vcpu, the sync ioctl code needs to be cleaned up when
CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL is not configured.
This patch moves vcpu_load()/vcpu_put() down into each ioctl switch case so
that each ioctl
From: Simon Guo
Since the vcpu mutex locking/unlocking has been moved out of vcpu_load()/
vcpu_put(), the KVM_GET_ONE_REG and KVM_SET_ONE_REG ioctls don't need to
load the vcpu anymore. This patch removes vcpu_load()/vcpu_put()
from the KVM_GET_ONE_REG and KVM_SET_ONE_REG ioctls.
Signed-off-by: Sim
From: Simon Guo
In both HV and PR KVM, the KVM_SET_REGS/KVM_GET_REGS ioctls should
be able to be performed without loading the vcpu. This patch adds a
KVM_SET_ONE_REG/KVM_GET_ONE_REG implementation to the async ioctl
function.
Since the vcpu mutex locking/unlocking has been moved out of vcpu_load()/
vcpu_put(), KVM_SET_
From: Simon Guo
In both HV and PR KVM, the KVM_SET_ONE_REG/KVM_GET_ONE_REG ioctls should
be able to be performed without loading the vcpu. This patch adds a
KVM_SET_ONE_REG/KVM_GET_ONE_REG implementation to the async ioctl
function.
Signed-off-by: Simon Guo
---
arch/powerpc/kvm/powerpc.c | 13 +
1 file c
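A sketch of the async-ioctl routing described; error handling is
abbreviated and the one-reg helper names follow existing powerpc.c code
but should be treated as assumptions here:

long kvm_arch_vcpu_async_ioctl(struct file *filp, unsigned int ioctl,
			       unsigned long arg)
{
	struct kvm_vcpu *vcpu = filp->private_data;
	void __user *argp = (void __user *)arg;

	switch (ioctl) {
	case KVM_GET_ONE_REG:
	case KVM_SET_ONE_REG: {
		struct kvm_one_reg reg;

		if (copy_from_user(&reg, argp, sizeof(reg)))
			return -EFAULT;
		/* No vcpu_load()/vcpu_put() needed on this path any more. */
		if (ioctl == KVM_GET_ONE_REG)
			return kvm_vcpu_ioctl_get_one_reg(vcpu, &reg);
		return kvm_vcpu_ioctl_set_one_reg(vcpu, &reg);
	}
	}
	return -ENOIOCTLCMD;
}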
From: Simon Guo
We need to be able to migrate PR KVM during a transaction, and qemu will
use the kvmppc_get_one_reg_pr()/kvmppc_set_one_reg_pr() APIs to get/set the
transaction checkpoint state. This patch adds support for that.
So far PPC PR qemu doesn't fully function for migration, but savevm/loadvm
can be done
On Tue, Feb 27, 2018 at 05:52:06PM +0200, Andy Shevchenko wrote:
> On Tue, Feb 27, 2018 at 9:44 AM, Mathieu Malaterre wrote:
> > On Tue, Feb 27, 2018 at 8:33 AM, Christophe LEROY
> > wrote:
>
> > Much simpler is just add
> >
> > if (ARRAY_SIZE() == 0)
> > return;
>
> >> Or add
When sending TLB invalidates to the NPU we need to send extra flushes due
to a hardware issue. The original implementation would lock all the
ATSD MMIO registers sequentially before unlocking and relocking each of
them sequentially to do the extra flush.
This introduced a deadlock as it is pos
On an MCE the current code will restart the machine with
ppc_md.restart(). This case was extremely unlikely since,
prior to that, a skiboot call is made which results in
a checkstop for analysis.
With newer skiboots, on P9 we don't checkstop the box by
default; instead we return to the kernel
From: Simon Guo
Although CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL is usually on, logically
the kvm_arch_vcpu_async_ioctl() definition should be wrapped in a
CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL #ifdef.
This patch adds the surrounding #ifdef.
Signed-off-by: Simon Guo
---
arch/mips/kvm/mips.c | 2 ++
arch/p
On Wed, 28 Feb 2018 11:38:14 +1100
Alistair Popple wrote:
> When sending TLB invalidates to the NPU we need to send extra flushes due
> to a hardware issue. The original implementation would lock the all the
> ATSD MMIO registers sequentially before unlocking and relocking each of
> them sequenti
mmap(-1, ..) is expected to search top-down from the maximum supported VA.
It should find an address above ADDR_SWITCH_HINT. Explicitly check for this.
Also dereference the address even if we failed the address check.
Signed-off-by: Aneesh Kumar K.V
---
tools/testing/selftests/vm/va_128TBswitch.c | 27
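A self-contained sketch of the kind of check described; the
ADDR_SWITCH_HINT value and mapping size here are illustrative, not the
selftest's actual constants:

#include <string.h>
#include <sys/mman.h>

#define ADDR_SWITCH_HINT (1UL << 47)	/* illustrative 128TB boundary */
#define MAP_LEN (2UL * 1024 * 1024)

int main(void)
{
	char *p = mmap((void *)-1, MAP_LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int ok;

	if (p == MAP_FAILED)
		return 1;

	/* mmap(-1, ..) should have searched top down and landed above the hint. */
	ok = (unsigned long)p > ADDR_SWITCH_HINT;

	/* Dereference the mapping even if the address check failed. */
	memset(p, 0, MAP_LEN);
	munmap(p, MAP_LEN);

	return ok ? 0 : 1;
}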
On Tue, 27 Feb 2018 18:11:07 +0530
"Aneesh Kumar K.V" wrote:
> Nicholas Piggin writes:
>
> > On Tue, 27 Feb 2018 14:31:07 +0530
> > "Aneesh Kumar K.V" wrote:
> >
> >> Christophe Leroy writes:
> >>
> >> > The number of high slices a process might use now depends on its
> >> > address spac
On 02/28/2018 12:23 PM, Nicholas Piggin wrote:
> On Tue, 27 Feb 2018 18:11:07 +0530
> "Aneesh Kumar K.V" wrote:
> > Nicholas Piggin writes:
> > > On Tue, 27 Feb 2018 14:31:07 +0530
> > > "Aneesh Kumar K.V" wrote:
> > > > Christophe Leroy writes:
> > > > > The number of high slices a process might use now depends on
On 28/02/2018 03:31, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> Although CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL is usually on, logically
> kvm_arch_vcpu_async_ioctl() definition should be wrapped with
> CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL #ifdef.
No, the symbol is defined by Kconfig. It is a b
I also noticed that the slice mask printing uses the wrong variables now. I
guess this should take care of it:
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index fef3f36b0b73..6b3575c39668 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -535,8 +535,6 @@ unsigned