On Tue, Jun 22, 2021 at 10:05:45AM +0530, Bharata B Rao wrote:
> On Mon, Jun 21, 2021 at 10:12:42AM -0700, Nathan Chancellor wrote:
> > I have not seen this reported yet so apologies if it has and there is a
> > fix I am missing:
> >
> > arch/powerpc/kvm/book3s_hv_nested.c:1334:11: error: variable
While booting 5.13.0-rc7-next-20210621 on a PowerVM LPAR following warning
is seen
[ 30.922154] [ cut here ]
[ 30.922201] cfs_rq->avg.load_avg || cfs_rq->avg.util_avg ||
cfs_rq->avg.runnable_avg
[ 30.922219] WARNING: CPU: 6 PID: 762 at kernel/sched/fair.c:3277
updat
Hi Sachin,
On Tue, 22 Jun 2021 at 09:39, Sachin Sant wrote:
>
> While booting 5.13.0-rc7-next-20210621 on a PowerVM LPAR following warning
> is seen
>
> [ 30.922154] [ cut here ]
> [ 30.922201] cfs_rq->avg.load_avg || cfs_rq->avg.util_avg ||
> cfs_rq->avg.runnable_avg
Excerpts from Christophe Leroy's message of June 22, 2021 4:47 pm:
>
>
> Le 22/06/2021 à 08:04, Nicholas Piggin a écrit :
>> The PPC_RFI_SRR_DEBUG check added by patch "powerpc/64s: avoid reloading
>> (H)SRR registers if they are still valid" has a few deficiencies. It
>> does not fix the actual
Le 22/06/2021 à 10:54, Nicholas Piggin a écrit :
Excerpts from Christophe Leroy's message of June 22, 2021 4:47 pm:
Le 22/06/2021 à 08:04, Nicholas Piggin a écrit :
The PPC_RFI_SRR_DEBUG check added by patch "powerpc/64s: avoid reloading
(H)SRR registers if they are still valid" has a few
Jordan Niethe writes:
> From: Christophe Leroy
>
> This reuses the DEBUG_PAGEALLOC logic.
>
> Tested with CONFIG_KFENCE + CONFIG_KUNIT + CONFIG_KFENCE_KUNIT_TEST on
> radix and hash.
>
> Signed-off-by: Christophe Leroy
> [jpn: Handle radix]
> Signed-off-by: Jordan Niethe
> ---
> arch/powerpc/K
Excerpts from Nathan Chancellor's message of June 22, 2021 4:24 am:
> LLVM does not emit optimal byteswap assembly, which results in high
> stack usage in kvmhv_enter_nested_guest() due to the inlining of
> byteswap_pt_regs(). With LLVM 12.0.0:
>
> arch/powerpc/kvm/book3s_hv_nested.c:289:6: error:
This series applies to the powerpc topic/ppc-kvm branch (the KVM C-ify
series in particular), plus "KVM: PPC: Book3S HV Nested: Reflect L2 PMU
in-use to L0 when L2 SPRs are live" posted to kvm-ppc.
This reduces radix guest full entry/exit latency on POWER9 and POWER10
by almost 2x (hash is similar but it's
This register is not architected and not implemented in POWER9 or 10,
it just reads back zeroes for compatibility.
-78 cycles (9255) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 3 ---
arch/powerpc/platforms/powernv/idle.c | 2 --
The host Linux timer code arms the decrementer with the value
'decrementers_next_tb - current_tb' using set_dec(), which stores
val - 1 on Book3S-64, which is not quite the same as what KVM does
to re-arm the host decrementer when exiting the guest.
This shouldn't be a significant change, but it m
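For reference, a minimal sketch of the set_dec() asymmetry being described
(not the exact upstream definition):

static inline void set_dec(u64 val)
{
#ifdef CONFIG_BOOKE
	mtspr(SPRN_DEC, val);		/* BookE: programs the value directly */
#else
	mtspr(SPRN_DEC, val - 1);	/* Book3S-64: stores val - 1 */
#endif
}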
There is no need to save away the host DEC value, as it is derived
from the host timer subsystem which maintains the next timer time,
so it can be restored from there.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/time.h | 5 +
arch/powerpc/kernel/time.c | 1 +
arch/powe
On processors that don't suppress the HDEC exceptions when LPCR[HDICE]=0,
this could help reduce needless guest exits due to leftover exceptions on
entering the guest.
Reviewed-by: Alexey Kardashevskiy
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/time.h | 2 ++
arch/powerpc
mftb is serialising (dispatch next-to-complete) so it is heavyweight
for an mfspr. Avoid reading it multiple times in the entry or exit paths.
A small number of cycles delay to timers is tolerable.
-118 cycles (9137) POWER9 virt-mode NULL hcall
Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas
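The pattern amounts to taking one timebase sample and deriving the needed
values from it; a hedged sketch (the 'dec' argument and the fields used are
illustrative):

static void account_entry_times(struct kvm_vcpu *vcpu, u64 dec)
{
	u64 tb = mftb();			/* one serialising read */

	vcpu->arch.dec_expires = dec + tb;	/* derive deadlines from 'tb' */
	vcpu->arch.busy_preempt = tb;		/* ... instead of another mftb() */
}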
Rather than have KVM look up the host timer and fiddle with the
irq-work internal details, have the powerpc/time.c code provide a
function for KVM to re-arm the Linux timer code when exiting a
guest.
This implementation has an improvement over the existing code of
marking a decrementer interrupt as
HV interrupts may be taken with the MMU enabled when radix guests are
running. Enable LPCR[HAIL] on ISA v3.1 processors for radix guests.
Make this depend on the host LPCR[HAIL] being enabled. Currently that is
always enabled, but having this test means any issue that might require
LPCR[HAIL] to be
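The condition being described is roughly the following (simplified sketch;
assumes an LPCR_HAIL bit definition and the usual kvm->arch.lpcr bookkeeping):

	if (cpu_has_feature(CPU_FTR_ARCH_31) && kvm_is_radix(kvm) &&
	    (mfspr(SPRN_LPCR) & LPCR_HAIL))	/* only if the host runs with HAIL */
		kvm->arch.lpcr |= LPCR_HAIL;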
This register controls supervisor SPR modifications, and as such is only
relevant for KVM. KVM always sets AMOR to ~0 on guest entry, and never
restores it coming back out to the host, so it can be kept constant and
avoid the mtSPR in KVM guest entry.
-21 cycles (9116) POWER9 virt-mode NULL
Revert the workaround added by commit 63279eeb7f93a ("KVM: PPC: Book3S
HV: Always save guest pmu for guest capable of nesting").
Nested capable guests running with the earlier commit ("KVM: PPC: Book3S
HV Nested: Indicate guest PMU in-use in VPA") will now indicate the PMU
in-use status of their g
KVM PMU management code looks for particular frozen/disabled bits in
the PMU registers so it knows whether it must clear them when coming
out of a guest or not. Setting this up helps KVM make these optimisations
without getting confused. Longer term the better approach might be to
move guest/host P
Implement the P9 path PMU save/restore code in C, and remove the
POWER9/10 code from the P7/8 path assembly.
-449 cycles (8533) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/asm-prototypes.h | 5 -
arch/powerpc/kvm/book3s_hv.c | 205 +
Factor duplicated code into a helper function.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 24
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index b1b94b3563b7..38d8afa168
Rather than guest/host save/restore functions, implement context switch
functions that take care of details like the VPA update for nested.
The reason to split these kinds of helpers into explicit save/load
functions is mainly to schedule SPR access nicely, but PMU is a special
case where the load
The pmcregs_in_use field in the guest VPA can not be trusted to reflect
what the guest is doing with PMU SPRs, so the PMU must always be managed
(stopped) when exiting the guest, and SPR values set when entering the
guest to ensure it can't cause a covert channel or otherwise cause other
guests or
Processors that support KVM HV do not require read-modify-write of
the CTRL SPR to set/clear their thread's runlatch. Just write 1 or 0
to it.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 2 +-
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 15 ++-
2 files
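So the runlatch update becomes a plain write, something like this (helper
name illustrative):

static inline void kvmppc_set_runlatch(bool on)
{
	/* no read-modify-write needed on KVM HV capable CPUs */
	mtspr(SPRN_CTRLT, on ? 1 : 0);
}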
Move the SPR update into its relevant helper function. This will
help with SPR scheduling improvements in later changes.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/pow
This reduces the number of mtmsrd required to enable facility bits when
saving/restoring registers, by having the KVM code set all bits up front
rather than using individual facility functions that set their particular
MSR bits.
-42 cycles (7803) POWER9 virt-mode NULL hcall
Signed-off-by: Nichola
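A hedged sketch of setting all facility bits in one go (the exact MSR bit
set depends on which registers the vcpu actually uses):

	unsigned long msr = mfmsr();

	msr |= MSR_FP | MSR_VEC | MSR_VSX;	/* everything needed up front */
	asm volatile("mtmsrd %0" : : "r" (msr) : "memory");
	/* FP/VEC/VSX and other facility state can now be saved/loaded
	 * without further per-facility mtmsrd calls */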
Moving the mtmsrd after the host SPRs are saved and before the guest
SPRs start to be loaded can prevent an SPR scoreboard stall (because
the mtmsrd is L=1 type, which does not cause context synchronisation).
This is also now more convenient to combine with the mtmsrd L=0
instruction to enable faci
Small cleanup makes it a bit easier to match up entry and exit
operations.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index b8b0695a931
Change dec_expires to be relative to the guest timebase, and allow
it to be moved into low level P9 guest entry functions, to improve
SPR access scheduling.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/kvm_book3s.h | 6 +++
arch/powerpc/include/asm/kvm_host.h | 2 +-
arch/
Move the TB updates between saving and loading guest and host SPRs,
to improve scheduling by keeping issue-NTC operations together as
much as possible.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 36 +--
1 file changed, 18 insertions(+), 18
Reduce the number of mfTB executed by passing the current timebase
around entry and exit code rather than reading it multiple times.
-213 cycles (7578) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/kvm_book3s_64.h | 2 +-
arch/powerpc/kvm/book3s_hv.c
Keep better track of current SPR values in places where
they are to be loaded with a new context, to reduce expensive
mtSPR operations.
-73 cycles (7354) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 64 ++--
1 f
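The pattern for avoiding redundant mtSPRs is roughly this (the host_os_sprs
fields and SPR choices are illustrative):

	if (host_os_sprs->iamr != vcpu->arch.iamr)
		mtspr(SPRN_IAMR, vcpu->arch.iamr);	/* skip the mtSPR if unchanged */
	if (host_os_sprs->amr != vcpu->arch.amr)
		mtspr(SPRN_AMR, vcpu->arch.amr);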
This juggles SPR switching on the entry and exit sides to be more
symmetric, which makes the next refactoring patch possible with no
functional change.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch
Avoid interleaving mfSPR and mtSPR.
-151 cycles (7427) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 8
arch/powerpc/kvm/book3s_hv_p9_entry.c | 19 +++
2 files changed, 15 insertions(+), 12 deletions(-)
diff --g
There should be no functional difference, but this makes the caller easier
to read.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 55 +---
1 file changed, 33 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/
Move the P9 guest/host register switching functions to the built-in
P9 entry code, and export it for nested to use as well.
This allows more flexibility in scheduling these supervisor privileged
SPR accesses with the HV privileged and PR SPR accesses in the low level
entry code.
Signed-off-by: Ni
This is just refactoring.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 125 +++
1 file changed, 67 insertions(+), 58 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index a7660af22161..64386fc0cd00 100644
Move register saving and loading from kvmhv_p9_guest_entry() into the HV
and nested entry handlers.
Accesses are scheduled to reduce mtSPR / mfSPR interleaving which
reduces SPR scoreboard stalls.
XXX +212 cycles here somewhere (7566), investigate POWER9 virt-mode NULL hcall
Signed-off-by: Nich
If TM is not active, only TM register state needs to be saved.
-348 cycles (7218) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 24
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/b
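The shape of the TM change is roughly as follows (sketch; helper and field
names assumed from the existing HV code):

	if (MSR_TM_ACTIVE(guest_msr)) {
		/* transactional/suspended: full checkpointed state save */
		kvmppc_save_tm_hv(vcpu, guest_msr, true);
	} else {
		/* TM not active: only the TM SPRs need saving */
		vcpu->arch.texasr = mfspr(SPRN_TEXASR);
		vcpu->arch.tfhar = mfspr(SPRN_TFHAR);
		vcpu->arch.tfiar = mfspr(SPRN_TFIAR);
	}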
This moves the PMU switch to the guest as late as possible in entry, and
switches back to the host as early as possible at exit. This helps the
host get as much perf coverage of the KVM entry/exit code as possible.
This is slightly suboptimal from an SPR scheduling point of view when the
PMU is enabled, but when perf
Use CPU_FTR_P9_RADIX_PREFETCH_BUG for this, to test for DD2.1 and below
processors.
-43 cycles (7178) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 3 ++-
arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 --
2 files changed, 6 insertions(+),
This avoids more scoreboard stalls and reduces mtSPRs.
-193 cycles (6985) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 67 ---
1 file changed, 40 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kvm/book3
Use HFSCR facility disabling to implement demand faulting for EBB, with
a hysteresis counter similar to the load_fp etc counters in context
switching that implement the equivalent demand faulting for userspace
facilities.
This speeds up guest entry/exit by avoiding the register save/restore
when a
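A hedged sketch of the HFSCR demand-faulting scheme for EBB (the counter
name, threshold and decay placement are illustrative):

	/* on a Hypervisor Facility Unavailable interrupt for EBB: */
	vcpu->arch.hfscr |= HFSCR_EBB;		/* give the guest the facility */
	vcpu->arch.load_ebb = 255;		/* hysteresis counter (illustrative) */

	/* on each exit, decay the counter and drop the facility when idle: */
	if (vcpu->arch.load_ebb && !--vcpu->arch.load_ebb)
		vcpu->arch.hfscr &= ~HFSCR_EBB;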
Use HFSCR facility disabling to implement demand faulting for TM, with
a hysteresis counter similar to the load_fp etc counters in context
switching that implement the equivalent demand faulting for userspace
facilities.
This speeds up guest entry/exit by avoiding the register save/restore
when a
Linux implements SPR save/restore including storage space for registers
in the task struct for process context switching. Make use of this
similarly to the way we make use of the context switching fp/vec
save/restore.
This improves code reuse, allows some stack space to be saved, and helps
with av
Tighten up partition switching code synchronisation and comments.
In particular, hwsync ; isync is required after the last access that is
performed in the context of a partition, before the partition is
switched away from.
-301 cycles (6319) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Pi
Some of the DAWR SPR accesses are already predicated on dawr_enabled();
apply this to the remainder of the accesses.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 34 ---
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/arch/powerp
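The remaining DAWR accesses become conditional along these lines (sketch):

	if (dawr_enabled()) {
		mtspr(SPRN_DAWR0, vcpu->arch.dawr0);
		mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0);
	}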
This also moves the PSSCR update in nested entry to avoid an SPR
scoreboard stall.
-45 cycles (6276) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 7 +--
arch/powerpc/kvm/book3s_hv_p9_entry.c | 26 +++---
2 files c
Use the existing TLB flushing logic to IPI the previous CPU and run the
necessary barriers before running a guest vCPU on a new physical CPU,
to do the necessary radix GTSE barriers for handling the case of an
interrupted guest tlbie sequence.
This results in more IPIs than the TLB flush logic req
mftb() is expensive and one can be avoided on nested guest dispatch.
If the time checking code distinguishes between the L0 timer and the
nested HV timer, then both can be tested in the same place with the
same mftb() value.
This also nicely illustrates the relationship between the L0 and nested
Rearrange the MSR saving on entry so it does not follow the mtmsrd to
disable interrupts, avoiding a possible RAW scoreboard stall.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/kvm_book3s_64.h | 2 +
arch/powerpc/kvm/book3s_hv.c | 18 ++-
arch/powerpc/kvm/book3s_h
slbmfee/slbmfev instructions are very expensive, more so than a regular
mfspr instruction, so minimising them significantly improves hash guest
exit performance. The slbmfev is only required if slbmfee found a valid
SLB entry.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv_p9_entry
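The SLB save loop then only issues slbmfev for valid entries, roughly as
below (simplified inline asm; bookkeeping fields assumed from the existing
code):

	for (i = 0; i < vcpu->arch.slb_nr; i++) {
		u64 slbee, slbev;

		asm volatile("slbmfee %0,%1" : "=r" (slbee) : "r" (i));
		if (slbee & SLB_ESID_V) {
			/* only pay for slbmfev on valid entries */
			asm volatile("slbmfev %0,%1" : "=r" (slbev) : "r" (i));
			vcpu->arch.slb[nr].orige = slbee | i;
			vcpu->arch.slb[nr].origv = slbev;
			nr++;
		}
	}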
Daniel Henrique Barboza writes:
> On 6/17/21 1:51 PM, Aneesh Kumar K.V wrote:
>> PAPR interface currently supports two different ways of communicating
>> resource
>> grouping details to the OS. These are referred to as Form 0 and Form 1
>> associativity grouping. Form 0 is the older format and i
On Wed, Jun 16, 2021 at 04:43:03PM +0300, Andy Shevchenko wrote:
> Parse to and export from UUID own type, before dereferencing.
> This also fixes wrong comment (Little Endian UUID is something else)
> and should eliminate the direct strict types assignments.
Any comments on this version? Can it b
On Tue, Jun 22, 2021 at 03:44:56PM +0300, Andy Shevchenko wrote:
> On Wed, Jun 16, 2021 at 04:43:03PM +0300, Andy Shevchenko wrote:
> > Parse to and export from UUID own type, before dereferencing.
> > This also fixes wrong comment (Little Endian UUID is something else)
> > and should eliminate the
On Thu, Jun 17, 2021 at 06:56:13PM +0530, Kajol Jain wrote:
> ---
> Kajol Jain (4):
> drivers/nvdimm: Add nvdimm pmu structure
> drivers/nvdimm: Add perf interface to expose nvdimm performance stats
> powerpc/papr_scm: Add perf interface support
> powerpc/papr_scm: Document papr_scm sysfs e
The function is counting reserved LMBs as available to be added, but
they aren't. This will cause the function to miscalculate the available
LMBs and can trigger errors later on when executing dlpar_add_lmb().
Signed-off-by: Daniel Henrique Barboza
---
arch/powerpc/platforms/pseries/hotplug-memo
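The fix amounts to skipping reserved LMBs when counting what is available,
roughly (flag names assumed from elsewhere in hotplug-memory.c):

	for_each_drmem_lmb(lmb) {
		if (lmb->flags & DRCONF_MEM_RESERVED)
			continue;		/* reserved LMBs are not available */
		if (!(lmb->flags & DRCONF_MEM_ASSIGNED))
			lmbs_available++;
	}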
Hi,
These are a couple of cleanups for the dlpar_memory_add* functions
that are similar to those I did a month or so ago in
dlpar_memory_remove_by_count and dlpar_memory_remove_by_ic.
Daniel Henrique Barboza (3):
powerpc/pseries: skip reserved LMBs in dlpar_memory_add_by_count()
powerpc/ps
After a successful dlpar_add_lmb() call the LMB is marked as reserved.
Later on, depending on whether we added enough LMBs or not, we rely on
the marked LMBs to see which ones might need to be removed, and we
remove the reservation of all of them.
These are done in for_each_drmem_lmb() loops without
The validation done at the start of dlpar_memory_add_by_ic() is an
all-or-nothing scenario - if any LMB in the range is marked as RESERVED
we can fail right away.
We can then remove the 'lmbs_available' var and its check with
'lmbs_to_add' since the whole LMB range was already validated in the
pr
Le mardi 22 juin 2021 à 09:49:31 (+0200), Vincent Guittot a écrit :
> Hi Sachin,
>
> On Tue, 22 Jun 2021 at 09:39, Sachin Sant wrote:
> >
> > While booting 5.13.0-rc7-next-20210621 on a PowerVM LPAR following warning
> > is seen
> >
> > [ 30.922154] [ cut here ]
> > [
Paolo Bonzini writes:
> On 22/06/21 07:25, Stephen Rothwell wrote:
>> Hi all,
>>
>> Today's linux-next merge of the kvm tree got a conflict in:
>>
>>include/uapi/linux/kvm.h
>>
>> between commit:
>>
>>9bb4a6f38fd4 ("KVM: PPC: Book3S HV: Add KVM_CAP_PPC_RPT_INVALIDATE
>> capability")
>
>> On Tue, 22 Jun 2021 at 09:39, Sachin Sant wrote:
>>>
>>> While booting 5.13.0-rc7-next-20210621 on a PowerVM LPAR following warning
>>> is seen
>>>
>>> [ 30.922154] [ cut here ]
>>> [ 30.922201] cfs_rq->avg.load_avg || cfs_rq->avg.util_avg ||
>>> cfs_rq->avg.runna
On 6/22/21 9:07 AM, Aneesh Kumar K.V wrote:
Daniel Henrique Barboza writes:
On 6/17/21 1:51 PM, Aneesh Kumar K.V wrote:
PAPR interface currently supports two different ways of communicating resource
grouping details to the OS. These are referred to as Form 0 and Form 1
associativity groupi
On 22/06/21 16:51, Michael Ellerman wrote:
Please drop the patches at
https://www.spinics.net/lists/kvm-ppc/msg18666.html from the powerpc
tree, and merge them through either the kvm-powerpc or kvm trees.
The kvm-ppc tree is not taking patches at the moment.
If so, let's remove the "T" entry
On 6/20/21 11:49 PM, Michael Ellerman wrote:
> Pass the value of linux_banner to firmware via option vector 7.
>
> Option vector 7 is described in "LoPAR" Linux on Power Architecture
> Reference v2.9, in table B.7 on page 824:
>
> An ASCII character formatted null terminated string that describ
On 6/21/21 9:11 PM, Michael Ellerman wrote:
> Daniel Axtens writes:
>> Hi
>>
>>> -static char __init *prom_strcpy(char *dest, const char *src)
>>> +static ssize_t __init prom_strscpy_pad(char *dest, const char *src, size_t
>>> n)
>>> {
>>> - char *tmp = dest;
>>> + ssize_t rc;
>>> + size_t
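The quoted hunk is cut off; purely as an illustration of strscpy_pad()-style
semantics (copy up to n-1 bytes, always NUL terminate, zero-pad the tail,
return the copied length or -E2BIG on truncation) rather than the actual
patch under review:

static ssize_t __init prom_strscpy_pad(char *dest, const char *src, size_t n)
{
	ssize_t rc = -E2BIG;
	size_t i;

	if (n == 0)
		return rc;

	for (i = 0; i < n - 1; i++) {
		dest[i] = src[i];
		if (src[i] == '\0') {
			rc = i;
			break;
		}
	}
	for (; i < n; i++)		/* terminate and pad the remainder */
		dest[i] = '\0';

	return rc;
}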
On Sat, 19 Jun 2021, Claire Chang wrote:
> Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> initialization to make the code reusable.
>
> Signed-off-by: Claire Chang
> Reviewed-by: Christoph Hellwig
> Tested-by: Stefano Stabellini
> Tested-by: Will Deacon
Acked-by: Ste
Tyrel Datwyler writes:
> On 6/20/21 11:49 PM, Michael Ellerman wrote:
>> Pass the value of linux_banner to firmware via option vector 7.
>>
>> Option vector 7 is described in "LoPAR" Linux on Power Architecture
>> Reference v2.9, in table B.7 on page 824:
>>
>> An ASCII character formatted nul
PowerVM will not arbitrarily oversubscribe or stop guests, page out the
guest kernel text to an NFS volume connected by carrier pigeon to abacus
based storage, etc., as a KVM host might. So PowerVM guests are not
likely to be killed by the hard lockup watchdog in normal operation,
even with shared p
The caller has been moved to C after irq soft-mask state has been
reconciled, and Linux irqs have been marked as disabled, so this
does not have to play games with irq internals.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/time.c | 11 ---
1 file changed, 11 deletions(-)
diff
ISA v2.06 (POWER7 and up) as well as e6500 support lbarx and lharx.
Add a compile option that allows code to use it, and add support in
cmpxchg and xchg of 8 and 16 bit values without shifting and masking.
Signed-off-by: Nicholas Piggin
---
v2: Fixed lwarx->lharx typo, switched to PPC_HAS_
arch/po
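For illustration, a simplified 1-byte compare-and-exchange built on
lbarx/stbcx. (no memory barriers or EH hint, unlike what the real patch
would generate):

static inline u8 cmpxchg_u8_relaxed(volatile u8 *p, u8 old, u8 new)
{
	u8 prev;

	asm volatile(
"1:	lbarx	%0,0,%2\n"	/* load byte and reserve */
"	cmpw	%0,%3\n"
"	bne-	2f\n"
"	stbcx.	%4,0,%2\n"	/* store conditional byte */
"	bne-	1b\n"		/* lost reservation, retry */
"2:"
	: "=&r" (prev), "+m" (*p)
	: "r" (p), "r" (old), "r" (new)
	: "cc", "memory");

	return prev;
}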
32-bit platforms don't have irq soft masking.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/Kconfig.debug | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index 6342f9da4545..45d871fb9155 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/a
rpc/sysdev/ehv_pic.c:111:5: error: no previous prototype for function
'ehv_pic_set_irq_type' [-Werror,-Wmissing-prototypes]
Error/Warning ids grouped by kconfigs:
clang_recent_errors
|-- powerpc-randconfig-r005-20210622
| |--
arch-powerpc-platforms-52xx-mpc52xx_pm.c:error:stack-fra
onfig-a001-20210622
i386 randconfig-a002-20210622
i386 randconfig-a003-20210622
i386 randconfig-a006-20210622
i386 randconfig-a005-20210622
i386 randconfig-a004-20210622
x86_64 randconfig-a012-20210
printk_safe_flush_on_panic() has special lock breaking code for the
case where we panic()ed with the console lock held. It relies on
panic IPI causing other CPUs to mark themselves offline.
Do as most other architectures do.
This effectively reverts commit de6e5d38417e ("powerpc: smp_send_stop do
Hi Rafael,
These are based on your patch [1] now.
commit 367dc4aa932b ("cpufreq: Add stop CPU callback to cpufreq_driver
interface") added the stop_cpu() callback to allow the drivers to do
clean up before the CPU is completely down and its state can't be
modified.
At that time the CPU hotplug f
commit 367dc4aa932b ("cpufreq: Add stop CPU callback to cpufreq_driver
interface") added the stop_cpu() callback to allow the drivers to do
clean up before the CPU is completely down and its state can't be
modified.
At that time the CPU hotplug framework used to call the cpufreq core's
registered
-a001-20210622
i386 randconfig-a002-20210622
i386 randconfig-a003-20210622
i386 randconfig-a006-20210622
i386 randconfig-a005-20210622
i386 randconfig-a004-20210622
x86_64 randconfig-a012-20210622
x86_64
On PowerVM, the hypervisor defines the maximum buffer length for
each NX request and the kernel exports this value via sysfs.
This patch reads this value if the sysfs entry is available and
uses it to limit the request length.
Signed-off-by: Haren Myneni
---
.../testing/selftests/powerpc/nx-
From: Naveen N. Rao
Trying to use a kprobe on ppc32 results in the below splat:
BUG: Unable to handle kernel data access on read at 0x7c0802a6
Faulting instruction address: 0xc002e9f0
Oops: Kernel access of bad area, sig: 11 [#1]
BE PAGE_SIZE=4K PowerPC 44x Platform
Modules li
Viresh Kumar writes:
>
> Subject: Re: [PATCH V4 3/4] cpufreq: powerenv: Migrate to ->exit() callback
> instead of ->stop_cpu()
Typo in subject should be "powernv".
cheers
commit 367dc4aa932b ("cpufreq: Add stop CPU callback to cpufreq_driver
interface") added the stop_cpu() callback to allow the drivers to do
clean up before the CPU is completely down and its state can't be
modified.
At that time the CPU hotplug framework used to call the cpufreq core's
registered
On 23-06-21, 15:45, Michael Ellerman wrote:
> Viresh Kumar writes:
> >
> > Subject: Re: [PATCH V4 3/4] cpufreq: powerenv: Migrate to ->exit() callback
> > instead of ->stop_cpu()
>
> Typo in subject should be "powernv".
Thanks for noticing it :)
--
viresh
Andy Shevchenko writes:
> On Wed, Jun 16, 2021 at 04:43:03PM +0300, Andy Shevchenko wrote:
>> Parse to and export from UUID own type, before dereferencing.
>> This also fixes wrong comment (Little Endian UUID is something else)
>> and should eliminate the direct strict types assignments.
>
> Any c
Hi all,
Today's linux-next merge of the kvm-arm tree got a conflict in:
include/uapi/linux/kvm.h
between commits:
b87cc116c7e1 ("KVM: PPC: Book3S HV: Add KVM_CAP_PPC_RPT_INVALIDATE
capability")
644f706719f0 ("KVM: x86: hyper-v: Introduce KVM_CAP_HYPERV_ENFORCE_CPUID")
0dbb11230437 ("KV
Bharata B Rao writes:
> On Tue, Jun 22, 2021 at 10:05:45AM +0530, Bharata B Rao wrote:
>> On Mon, Jun 21, 2021 at 10:12:42AM -0700, Nathan Chancellor wrote:
>> > I have not seen this reported yet so apologies if it has and there is a
>> > fix I am missing:
>> >
>> > arch/powerpc/kvm/book3s_hv_nes