On Wed, Oct 07, 2020 at 04:42:55PM +0200, Jann Horn wrote:
> > > @@ -43,7 +43,7 @@ static inline long do_mmap2(unsigned long addr, size_t len,
> > > {
> > > long ret = -EINVAL;
> > >
> > > - if (!arch_validate_prot(prot, addr))
> > > + if (!arch_validate_prot(prot, addr, len))
On 10/8/20 4:23 AM, Oliver O'Halloran wrote:
> On Fri, Sep 25, 2020 at 7:23 PM Cédric Le Goater wrote:
>>
>> To fix an issue with PHB hotplug on pSeries machine (HPT/XIVE), commit
>> 3a3181e16fbd introduced a PPC specific pcibios_remove_bus() routine to
>> clear all interrupt mappings when the bus
[] irq_enter_rcu+0x70/0xa0
CPU: 88 PID: 0 Comm: swapper/88 Tainted: G W 5.9.0-rc8-next-20201007 #1
Call Trace:
[c0002a4bfcf0] [c0649e98] dump_stack+0xec/0x144 (unreliable)
[c0002a4bfd30] [c00f6c34] ___might_sleep+0x2f4/0x310
[c0002a4bfdb0] [c0354f94
On Fri, Sep 25, 2020 at 7:23 PM Cédric Le Goater wrote:
>
> To fix an issue with PHB hotplug on pSeries machine (HPT/XIVE), commit
> 3a3181e16fbd introduced a PPC specific pcibios_remove_bus() routine to
> clear all interrupt mappings when the bus is removed. This routine
> frees an array allocate
On 10/7/20 1:39 AM, Jann Horn wrote:
> sparc_validate_prot() is called from do_mprotect_pkey() as
> arch_validate_prot(); it tries to ensure that an mprotect() call can't
> enable ADI on incompatible VMAs.
> The current implementation only checks that the VMA at the start address
> matches the rule
On 10/7/20 1:39 AM, Jann Horn wrote:
> arch_validate_prot() is a hook that can validate whether a given set of
> protection flags is valid in an mprotect() operation. It is given the set
> of protection flags and the address being modified.
>
> However, the address being modified can currently not
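The point above can be made concrete with a small userspace sketch (illustrative names and data only, not the real kernel code): with only `addr`, a validate hook can consult just the VMA containing the start address, while passing `len` lets it walk every VMA the operation touches.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a VMA; adi_capable models an arch-specific property. */
struct vma {
	unsigned long start, end;	/* [start, end) */
	bool adi_capable;
};

/* Find the VMA containing addr, if any. */
static const struct vma *find_vma(const struct vma *vmas, size_t n,
				  unsigned long addr)
{
	for (size_t i = 0; i < n; i++)
		if (addr >= vmas[i].start && addr < vmas[i].end)
			return &vmas[i];
	return NULL;
}

/* Old-style check: only looks at the VMA containing the start address. */
static bool validate_start_only(const struct vma *vmas, size_t n,
				unsigned long addr)
{
	const struct vma *v = find_vma(vmas, n, addr);
	return v && v->adi_capable;
}

/* New-style check: walks every VMA overlapping [addr, addr + len). */
static bool validate_range(const struct vma *vmas, size_t n,
			   unsigned long addr, unsigned long len)
{
	for (unsigned long a = addr; a < addr + len; ) {
		const struct vma *v = find_vma(vmas, n, a);
		if (!v || !v->adi_capable)
			return false;
		a = v->end;		/* jump to the next VMA */
	}
	return true;
}
```

With two adjacent VMAs where only the first is ADI-capable, the start-only check passes while the range check correctly rejects the operation.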
update_mask_by_l2() is called only once, but it takes cpu_l2_cache_mask as a
parameter. Instead of passing cpu_l2_cache_mask in, use it directly inside
update_mask_by_l2().
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver
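A minimal sketch of this cleanup, using illustrative userspace stand-ins (the names mirror the commit message, the bodies do not mirror the kernel): a helper only ever called with one particular mask function does not need it as a parameter.

```c
#include <assert.h>
#include <stdint.h>

/* Toy l2-cache mask: each 8-thread block shares a cache (illustrative). */
static uint64_t cpu_l2_cache_mask(int cpu) { return 0xffULL << (cpu & ~7); }

/* Before: the single caller passes cpu_l2_cache_mask explicitly. */
static uint64_t update_mask_old(int cpu, uint64_t (*mask_fn)(int))
{
	return mask_fn(cpu);
}

/* After: the only mask function in use is folded in directly. */
static uint64_t update_mask_by_l2(int cpu)
{
	return cpu_l2_cache_mask(cpu);
}
```

Both forms compute the same mask; dropping the parameter just removes indirection at the lone call site.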
All threads of an SMT4/SMT8 core are either part of this CPU's coregroup
mask or outside the coregroup. Use this relation to reduce the
number of iterations needed to find all the CPUs that share the same
coregroup.
Use a temporary mask to iterate through the CPUs that may share the
coregroup mask. Also i
All threads of an SMT4 core are either part of this CPU's l2-cache
mask or unrelated to it. Use this relation to
reduce the number of iterations needed to find all the CPUs that share
the same l2-cache.
Use a temporary mask to iterate through the CPUs that may share l2_cach
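The shrinking-mask iteration described above can be sketched in plain C with bitmasks standing in for cpumasks (all names and sizes here are illustrative, not the kernel's): whole sibling groups are accepted or rejected at once and removed from the temporary mask, so each step shrinks the remaining search space.

```c
#include <assert.h>
#include <stdint.h>

/* sibling_mask[cpu]: bit set for each thread of cpu's core (SMT4 here). */
static uint64_t sibling_mask(int cpu) { return 0xfULL << (cpu & ~3); }

/* Example predicate: CPUs "share a cache" iff they sit in the same
 * 8-thread block (purely for demonstration). */
static int same_block(int a, int b) { return a / 8 == b / 8; }

/* Build the set of CPUs sharing a cache with `cpu`. Instead of testing
 * every online CPU individually, test one thread per core and take or
 * drop the whole sibling group, clearing it from the temporary mask. */
static uint64_t build_cache_mask(int cpu, uint64_t online,
				 int (*shares_cache)(int, int))
{
	uint64_t result = 0;
	uint64_t todo = online;		/* temporary mask to iterate over */

	while (todo) {
		int i = __builtin_ctzll(todo);		/* lowest remaining CPU */
		uint64_t group = sibling_mask(i) & online;

		if (shares_cache(cpu, i))
			result |= group;	/* take the whole core at once */
		todo &= ~group;			/* never revisit these threads */
	}
	return result;
}
```

With 16 online CPUs in SMT4 cores, only 4 predicate calls are needed instead of 16, because each call settles a full core.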
Move the logic for updating the coregroup mask of a CPU into its own
function. This will help in reworking the coregroup mask update in a
subsequent patch.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Ha
CACHE and COREGROUP domains are now part of the default topology. However, on
systems that don't support CACHE or COREGROUP, these domains will
eventually be degenerated. The degeneration happens per CPU. Do note that the
current fixup_topology() logic ensures that the mask of a domain that is not
supported on t
Currently on hotplug/hotunplug, a CPU iterates through all the CPUs in
its core to find the threads in its thread group. However, this info is
already captured in cpu_l1_cache_map. Hence, reduce the iterations and
clean up the add_cpu_to_smallcore_masks() function.
Signed-off-by: Srikar Dronamraju
Tested-by: Sathee
All the arch-specific topology cpumasks are within a node/DIE.
However, when setting these per-CPU cpumasks, the system traverses through
all the online CPUs. This is redundant.
Reduce the traversal to only the CPUs that are online in the node to which
the CPU belongs.
Signed-off-by: Srikar Dronamraju
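The saving can be illustrated with a small userspace sketch (illustrative masks and node sizes, not the kernel's code): intersecting the online mask with the CPU's node mask before iterating means the loop only visits CPUs that could possibly share an intra-node topology mask.

```c
#include <assert.h>
#include <stdint.h>

/* node_mask: bit set for each CPU in the same node (16-CPU nodes here). */
static uint64_t node_mask(int cpu) { return 0xffffULL << (cpu & ~15); }

/* Old approach: visit every online CPU in the system. */
static int count_visited_all(uint64_t online)
{
	return __builtin_popcountll(online);
}

/* New approach: visit only online CPUs in this CPU's node. */
static int count_visited_node(int cpu, uint64_t online)
{
	return __builtin_popcountll(online & node_mask(cpu));
}
```

On a fully online 64-CPU system with 16-CPU nodes, the per-CPU work drops from 64 visited CPUs to 16.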
Now that cpu_core_mask has been removed and topology_core_cpumask has
been updated to use cpu_cpu_mask, we no longer need
get_physical_package_id().
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blancha
While offlining a CPU, the system currently iterates through all the CPUs in
the DIE to clear the sibling, l2_cache and smallcore maps. However, if there
are more cores in a DIE, the system can end up spending more time iterating
through CPUs which are completely unrelated.
Optimize this by only iterating throu
Changelog v2->v3:
v1 link:
https://lore.kernel.org/linuxppc-dev/20200921095653.9701-1-sri...@linux.vnet.ibm.com/t/#u
Use GFP_ATOMIC instead of GFP_KERNEL since allocations need to be
atomic at the time of CPU hotplug.
Reported-by: Qian Cai
Only changes in Patch 09 and
Anton Blanchard reported that his 4096-vCPU KVM guest took around 30
minutes to boot. He traced this to the time taken to iterate while
setting the cpu_core_mask.
Further analysis shows that cpu_core_mask and cpu_cpu_mask for any CPU
would be equal on Power. However updating cpu_core_mask too
On Power, cpu_core_mask and cpu_cpu_mask refer to the same set of CPUs.
cpu_cpu_mask is needed by the scheduler, hence look at deprecating
cpu_core_mask. Before deleting cpu_core_mask, ensure its only user
is moved to cpu_cpu_mask.
Signed-off-by: Srikar Dronamraju
Tested-by: Satheesh Rajendran
C
On Wed, 2020-10-07 at 19:47 +0530, Srikar Dronamraju wrote:
> Can you confirm if CONFIG_CPUMASK_OFFSTACK is enabled in your config?
Yes, https://gitlab.com/cailca/linux-mm/-/blob/master/powerpc.config
We test here almost daily on linux-next.
[ 335.420023][T0] softirqs last enabled at (18074440): [] irq_enter_rcu+0x94/0xa0
[ 335.420026][T0] softirqs last disabled at (18074439): [] irq_enter_rcu+0x70/0xa0
[ 335.420030][T0] CPU: 88 PID: 0 Comm: swapper/88 Tainted: G W 5.9.0-rc8-next-20201007 #1
[ 335.420032][T0] Call Trace:
[ 335.42
> +++ b/arch/sparc/include/asm/mman.h
> @@ -60,31 +60,41 @@ static inline int sparc_validate_prot(unsigned long prot, unsigned long addr,
> if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
> return 0;
> if (prot & PROT_ADI) {
> + struc
On Wed, Oct 07, 2020 at 09:39:31AM +0200, Jann Horn wrote:
> diff --git a/arch/powerpc/kernel/syscalls.c b/arch/powerpc/kernel/syscalls.c
> index 078608ec2e92..b1fabb97d138 100644
> --- a/arch/powerpc/kernel/syscalls.c
> +++ b/arch/powerpc/kernel/syscalls.c
> @@ -43,7 +43,7 @@ static inline long do
Make it consistent with other usages.
Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/mm/book3s64/radix_pgtable.c        |  7 ---
 arch/powerpc/platforms/pseries/hotplug-memory.c | 13 +
 2 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/rad
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block panic"),
make sure different variables tracking lmb_size are updated to be 64 bit.
Fixes: af9d00e93a4f ("powerpc/mm/radix: Create separate mappings for hot-plugged memory")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block panic"),
make sure different variables tracking lmb_size are updated to be 64 bit.
This was found by code audit.
Cc: sta...@vger.kernel.org
Signed-off-by: Aneesh Kumar K.V
---
.../platforms/pseries/hotplug-memory.c
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block panic"),
make sure different variables tracking lmb_size are updated to be 64 bit.
This was found by code audit.
Cc: sta...@vger.kernel.org
Acked-by: Nathan Lynch
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/
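The class of bug these lmb_size patches guard against can be shown with a small userspace sketch (the types and numbers are illustrative, not the kernel's code): arithmetic on a 32-bit block size silently wraps once the product reaches 4GB, while the 64-bit version does not.

```c
#include <assert.h>
#include <stdint.h>

/* Total memory spanned by n logical memory blocks, 32-bit version:
 * the multiply happens in 32 bits and can wrap before widening. */
static uint64_t span_32(uint32_t lmb_size, uint32_t n)
{
	return lmb_size * n;
}

/* Same computation with the size tracked as a 64-bit quantity. */
static uint64_t span_64(uint64_t lmb_size, uint64_t n)
{
	return lmb_size * n;
}
```

With 256MB blocks, 16 of them span exactly 4GB: the 32-bit arithmetic wraps to 0, while the 64-bit version yields the correct total.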
Changes from v2:
* Don't use root addr and size cells during runtime. Walk up the
device tree and use the first addr and size cells value (of_n_addr_cells()/
of_n_size_cells())
Aneesh Kumar K.V (4):
powerpc/drmem: Make lmb_size 64 bit
powerpc/memhotplug: Make lmb size 64bit
powerpc/book3
OPAL interrupts the kernel whenever it has a new error log. The kernel calls
the interrupt handler (elog_event()) to retrieve the event; elog_event() makes
an OPAL API call (opal_get_elog_size()) to retrieve the elog info.
In some cases, before the kernel makes the opal_get_elog_size() call, it gets
the interrupt again. So the second time wh
Every dump reported by OPAL is exported to userspace through a sysfs
interface and notified using kobject_uevent(). The userspace daemon
(opal_errd) then reads the dump and acknowledges that the dump has been
saved safely to disk. Once acknowledged, the kernel removes the
respective sysfs file entry, causi
The inline execution path for the hardware assisted branch flush
instruction failed to set CTR to the correct value before bcctr,
causing a crash when the feature is enabled.
Fixes: 4d24e21cc694 ("powerpc/security: Allow for processors that flush the
link stack using the special bcctr")
Signed-of