+void __weak arch_teardown_msi_irqs(struct pci_dev *dev)
+{
+	struct msi_desc *desc;
+	int i;
+
+	for_each_pci_msi_entry(desc, dev) {
+		if (desc->irq) {
+			for (i = 0; i < entry->nvec_used; i++)
I guess this is 'desc' ?
Thanks,
C.
+
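For reference, a sketch of how that loop reads with 'desc' used throughout
(the inner call mirrors the pre-existing weak fallback; illustrative only,
not the actual hunk):

	for_each_pci_msi_entry(desc, dev) {
		if (desc->irq) {
			/* tear down every vector allocated for this entry */
			for (i = 0; i < desc->nvec_used; i++)
				arch_teardown_msi_irq(desc->irq + i);
		}
	}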
On 11/27/21 02:18, Thomas Gleixner wrote:
Remove the kobject.h include from msi.h as it's not required and add a
sysfs.h include to the core code instead.
Signed-off-by: Thomas Gleixner
This patch breaks the build on powerpc:
CC arch/powerpc/kernel/msi.o
In file included from ../arch/
On Sun, Nov 28, 2021 at 07:03:41PM +0100, mirq-t...@rere.qmqm.pl wrote:
> On Sat, Nov 27, 2021 at 07:56:55PM -0800, Yury Norov wrote:
> > In many cases people use bitmap_weight()-based functions like this:
> >
> > if (num_present_cpus() > 1)
> > do_something();
> >
> > This may ta
On 27/11/2021 at 19:10, Christophe Leroy wrote:
copy_inst_from_kernel_nofault() uses copy_from_kernel_nofault() to
copy one or two 32-bit words. This means calling an out-of-line
function which itself calls back into copy_from_kernel_nofault_allowed()
and then performs a generic copy with loops.
Rew
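A hedged sketch of the direction this points at: do the access inline with
__get_kernel_nofault() instead of going through the out-of-line copy. The
helper name and structure below are illustrative, not the actual patch:

/* needs <linux/uaccess.h> and the powerpc is_kernel_addr() helper */
static __always_inline int read_insn_word_nofault(u32 *dst, const u32 *src)
{
	/* keep the existing "kernel addresses only" policy */
	if (unlikely(!is_kernel_addr((unsigned long)src)))
		return -ERANGE;

	__get_kernel_nofault(dst, src, u32, Efault);
	return 0;
Efault:
	return -EFAULT;
}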
With recent linux-next builds on IBM Power servers, depending on the
file system used, I have observed that the file system is either
mounted read-only (ext4) or the kernel fails to reach the login prompt (with XFS).
In both cases the following messages are seen during boot:
[4.379474] sd 0:0:1:0: [sda] t
https://bugzilla.kernel.org/show_bug.cgi?id=205099
--- Comment #44 from Christophe Leroy (christophe.le...@csgroup.eu) ---
Interesting.
I wonder why it works with OUTLINE KASAN but not with INLINE KASAN.
Can you activate CONFIG_PTDUMP_DEBUGFS and provide the content of
/sys/kernel/debug/kernel_p
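In case it helps, the usual way to produce that dump (the debugfs file name
is completed from memory here, so double-check it on the running kernel):

  # kernel config
  CONFIG_PTDUMP_CORE=y
  CONFIG_PTDUMP_DEBUGFS=y

  # at runtime
  mount -t debugfs none /sys/kernel/debug
  cat /sys/kernel/debug/kernel_page_tables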
Allow the LPID bit width and partition table size to be set at runtime
from the device tree.
Move the PID bit width detection into the same place.
KVM does not support using the extra bits yet; this is mainly required
to get the PTCR register values correct (so KVM will run but it will
not alloca
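A hedged sketch of the kind of device-tree override described; the property
names, variable names and scan hook are assumptions, not the actual patch:

/* needs <linux/of_fdt.h>; called early via of_scan_flat_dt(dt_scan_mmu_bits, NULL) */
static int __init dt_scan_mmu_bits(unsigned long node, const char *uname,
				   int depth, void *data)
{
	const __be32 *prop;

	if (depth != 1 || strcmp(uname, "cpus") != 0)
		return 0;

	/* override the default LPID width if firmware provides one */
	prop = of_get_flat_dt_prop(node, "ibm,mmu-lpid-bits", NULL);
	if (prop)
		mmu_lpid_bits = be32_to_cpup(prop);

	/* PID width detection now lives in the same place */
	prop = of_get_flat_dt_prop(node, "ibm,mmu-pid-bits", NULL);
	if (prop)
		mmu_pid_bits = be32_to_cpup(prop);

	return 1;
}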
Microwatt implements a subset of ISA v3.0 (which is equivalent to
the POWER9_CPU option). It is radix-only, so does not require hash
MMU support.
This saves 20kB compressed dtbImage and 56kB vmlinux size.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/configs/microwatt_defconfig | 3 ++-
arch/
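A hedged sketch of the defconfig delta this implies (symbol names taken from
this and the following patches; illustrative rather than the literal hunk):

  CONFIG_POWER9_CPU=y
  # CONFIG_PPC_64S_HASH_MMU is not set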
Compiling out hash support code when CONFIG_PPC_64S_HASH_MMU=n saves
128kB kernel image size (90kB text) on powernv_defconfig minus KVM,
350kB on pseries_defconfig minus KVM, 40kB on a tiny config.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/Kconfig | 2 +-
arch/pow
This adds Kconfig selection which allows 64s hash MMU support to be
disabled. It can be disabled if radix support is enabled, the minimum
supported CPU type is POWER9 (or higher), and KVM is not selected.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/Kconfig | 3 ++-
arch/
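A hedged sketch of the option being described (the selectability conditions
are paraphrased from the changelog, not the literal hunk):

config PPC_64S_HASH_MMU
	bool "Hash MMU support"
	depends on PPC_BOOK3S_64
	default y
	help
	  Enable 64-bit Book3S hash MMU support. Per the changelog, it may
	  only be disabled when radix is enabled, the minimum CPU type is
	  POWER9 and KVM is not selected.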
There are a few places that require MMU_FTR_HPTE_TABLE to be set even
when running in radix mode. Fix those up.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/mm/pgtable.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pg
mmu_linear_psize is only set at boot once on 64e, is not necessarily
the correct size of the linear map pages, and is never used anywhere.
Remove it.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/nohash/mmu-book3e.h | 1 -
arch/powerpc/mm/nohash/tlb.c | 9 -
memremap_compat_align is only relevant when ZONE_DEVICE is selected.
ZONE_DEVICE depends on ARCH_HAS_PTE_DEVMAP, which is only selected
by PPC_BOOK3S_64.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/Kconfig | 2 +-
arch/powerpc/mm/book3s64/pgtable.c | 20
a
Radix never sets mmu_linear_psize so it's always 4K, which causes pcpu
atom_size to always be PAGE_SIZE. 64e sets it to 1GB always.
Make paths for these platforms to be explicit about what value they set
atom_size to.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/setup_64.c | 21 ++
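A hedged sketch of making those choices explicit, following the values stated
in the changelog (structure is illustrative, not the literal hunk):

	size_t atom_size;

	if (IS_ENABLED(CONFIG_PPC_BOOK3E_64))
		atom_size = SZ_1M;	/* 64e: 1GB linear pages, keep the historical 1MB atom */
	else if (radix_enabled())
		atom_size = PAGE_SIZE;	/* radix: mmu_linear_psize stays 4K today */
	else if (mmu_linear_psize == MMU_PAGE_4K)
		atom_size = PAGE_SIZE;	/* hash with a 4K linear map: no need to group units */
	else
		atom_size = SZ_1M;	/* hash with a larger linear map: group units */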
This file contains functions and data common to radix, so rename it to
remove the hash_ prefix.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/mm/book3s64/Makefile | 2 +-
arch/powerpc/mm/book3s64/{hash_hugetlbpage.c => hugetlbpage.c} | 0
2 files changed, 1 inserti
The radix code uses some of the psize variables. Move the common
ones from hash_utils.c to pgtable.c.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/mm/book3s64/hash_utils.c | 5 -
arch/powerpc/mm/book3s64/pgtable.c| 7 +++
2 files changed, 7 insertions(+), 5 deletions(-)
diff --gi
The radix test can exclude slb_flush_all_realmode() from being called
because flush_and_reload_slb() is only expected to flush ERAT when
called by flush_erat(), which is only on pre-ISA v3.0 CPUs that do not
support radix.
This helps the later change to make hash support configurable to not
introd
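A hedged sketch of the resulting flow in flush_and_reload_slb() (ordering and
comments inferred from the changelog, not the literal hunk):

static void flush_and_reload_slb(void)
{
	/*
	 * With radix there is no SLB, and the ERAT-flush fallback that
	 * relies on slbia is only used by flush_erat() on pre-ISA v3.0
	 * CPUs, which cannot be running radix. Nothing to do here.
	 */
	if (early_radix_enabled())
		return;

	/* Invalidate all SLBs */
	slb_flush_all_realmode();

	/* ... save and reload the bolted SLB entries as before ... */
}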
In preparation for making hash MMU support configurable, move THP
trace point function definitions out of an otherwise hash-specific
file.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/mm/book3s64/Makefile | 2 +-
arch/powerpc/mm/book3s64/hash_pgtable.c | 1 -
arch/powerpc/mm/book3s64/tr
This avoids a change in behaviour in the later patch making hash
support configurable. This is possibly a user interface change, so
the alternative would be a hard-coded slb_size=0 here.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/platforms/pseries/lparcfg.c | 3 ++-
1 file changed, 2 insert
This reduces ifdefs in a later change which makes hash support configurable.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/platforms/pseries/lpar.c | 56 +--
1 file changed, 28 insertions(+), 28 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/lpar.c
b/arch/po
slb.c is hash-specific SLB management, but do_bad_slb_fault deals with
segment interrupts that occur with radix MMU as well.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/interrupt.h | 2 +-
arch/powerpc/kernel/exceptions-64s.S | 4 ++--
arch/powerpc/mm/book3s64/slb.c | 16
The pseries platform does not use the native hash code but the PAPR
virtualised hash interfaces, so remove PPC_HASH_MMU_NATIVE.
This requires moving tlbiel code from hash_native.c to hash_utils.c.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/book3s/64/tlbflush.h | 4 -
arch/pow
PPC_NATIVE now only controls the native HPT code, so rename it to be
more descriptive. Restrict it to Book3S only.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/mm/book3s64/Makefile | 2 +-
arch/powerpc/mm/book3s64/hash_utils.c | 2 +-
arch/powerpc/platforms/52xx/Kconfig|
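As a hedged sketch, the renamed option would look roughly like this (help
text paraphrased):

config PPC_HASH_MMU_NATIVE
	bool
	depends on PPC_BOOK3S
	help
	  Support for running natively on the hardware, i.e. without a
	  hypervisor. Not user-selectable; platforms that need the native
	  hash-table management select it.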
Now that there's a platform that can make good use of it, here's
a series that can prevent the hash MMU code being built for 64s
platforms that don't need it.
Since v4:
- Fix 32s allnoconfig compile found by kernel test robot.
- Fix 64s platforms like cell that require hash but allow POWER9 CPU
FW_FEATURE_NATIVE_ALWAYS and FW_FEATURE_NATIVE_POSSIBLE are always
zero and never do anything. Remove them.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/firmware.h | 8
1 file changed, 8 deletions(-)
diff --git a/arch/powerpc/include/asm/firmware.h
b/arch/powerpc/includ
On Mon, Nov 29, 2021 at 10:51:18AM +0800, Kefeng Wang wrote:
> Hi Dennis and all maintainers, any comments about the changes, many thanks.
>
> On 2021/11/21 17:35, Kefeng Wang wrote:
> > When adding support for the page mapping percpu first chunk allocator on arm64, we
> > found there is a lot of duplicated code
On Sun, Nov 28, 2021 at 09:08:41PM +1000, Nicholas Piggin wrote:
> Excerpts from Yury Norov's message of November 28, 2021 1:56 pm:
> > In many cases people use bitmap_weight()-based functions like this:
> >
> > if (num_present_cpus() > 1)
> > do_something();
> >
> > This may take
https://bugzilla.kernel.org/show_bug.cgi?id=205099
--- Comment #42 from Erhard F. (erhar...@mailbox.org) ---
Created attachment 299761
--> https://bugzilla.kernel.org/attachment.cgi?id=299761&action=edit
dmesg (5.15.5, INLINE KASAN, LOWMEM_SIZE=0x2000, PowerMac G4 DP)
--
You may reply to t
https://bugzilla.kernel.org/show_bug.cgi?id=205099
Erhard F. (erhar...@mailbox.org) changed:
           What    |Removed |Added
----------------------------------------------------------------
 Attachment #299715|0       |1
        is obsolete|        |
https://bugzilla.kernel.org/show_bug.cgi?id=205099
Erhard F. (erhar...@mailbox.org) changed:
           What    |Removed |Added
----------------------------------------------------------------
 Attachment #299711|0       |1
        is obsolete|        |
On 29.11.2021 00:17, Michał Mirosław wrote:
>> I'm having trouble with parsing this comment. Could you please try to
>> rephrase it? I don't see how you could check whether power-off handler
>> is available if you'll mix all handlers together.
> If notify_call_chain() would be fixed to return NOTIFY_O
On 28.11.2021 04:15, Michał Mirosław wrote:
> On Fri, Nov 26, 2021 at 09:00:54PM +0300, Dmitry Osipenko wrote:
>> Kernel now supports chained power-off handlers. Use do_kernel_power_off()
>> that invokes chained power-off handlers. It also invokes legacy
>> pm_power_off() for now, which will be remove
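As a minimal illustration of the substitution being discussed (the
surrounding driver context is hypothetical, not taken from the patch):

-	if (pm_power_off)
-		pm_power_off();
+	do_kernel_power_off();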
On 28.11.2021 03:28, Michał Mirosław wrote:
> On Fri, Nov 26, 2021 at 09:00:41PM +0300, Dmitry Osipenko wrote:
>> Add sanity check which ensures that there are no two restart handlers
>> registered with the same priority. Normally it's a direct sign of a
>> problem if two handlers use the same priorit
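A hedged sketch of such a check, walking a notifier chain's entries
(locking/RCU details omitted; the helper name is an assumption):

static bool restart_priority_in_use(struct notifier_block *head, int priority)
{
	struct notifier_block *nb;

	/* flag a second handler registering with an already-used priority */
	for (nb = head; nb; nb = nb->next)
		if (nb->priority == priority)
			return true;

	return false;
}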
On 28.11.2021 04:23, Michał Mirosław wrote:
> On Fri, Nov 26, 2021 at 09:00:58PM +0300, Dmitry Osipenko wrote:
>> Replace legacy pm_power_off with kernel_can_power_off() helper that
>> is aware about chained power-off handlers.
>>
>> Signed-off-by: Dmitry Osipenko
>> ---
>> drivers/memory/emif.c | 2
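Again purely as an illustration (the context line is hypothetical), the
pattern being replaced looks like:

-	if (!pm_power_off)
+	if (!kernel_can_power_off())
 		return;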
On 28.11.2021 03:43, Michał Mirosław wrote:
> On Fri, Nov 26, 2021 at 09:00:44PM +0300, Dmitry Osipenko wrote:
>> SoC platforms often have multiple ways to perform the system's
>> power-off and restart operations. Meanwhile today's kernel is limited to
>> a single option. Add combined power-off+res
On Sun, Nov 28, 2021 at 12:54:00PM -0500, Dennis Zhou wrote:
> Hello,
>
> On Sun, Nov 28, 2021 at 09:43:20AM -0800, Yury Norov wrote:
> > On Sun, Nov 28, 2021 at 09:07:52AM -0800, Joe Perches wrote:
> > > On Sat, 2021-11-27 at 19:57 -0800, Yury Norov wrote:
> > > > Add num_{possible,present,active
On Sun, 2021-11-28 at 09:43 -0800, Yury Norov wrote:
> On Sun, Nov 28, 2021 at 09:07:52AM -0800, Joe Perches wrote:
> > On Sat, 2021-11-27 at 19:57 -0800, Yury Norov wrote:
> > > Add num_{possible,present,active}_cpus_{eq,gt,le} and replace num_*_cpus()
> > > with one of new functions where appropr
On Sun, 28 Nov 2021 at 18:43, Yury Norov wrote:
> On Sun, Nov 28, 2021 at 09:07:52AM -0800, Joe Perches wrote:
> > On Sat, 2021-11-27 at 19:57 -0800, Yury Norov wrote:
> > > Add num_{possible,present,active}_cpus_{eq,gt,le} and replace num_*_cpus()
> > > with one of new functions where appropriate
Hello,
On Sun, Nov 28, 2021 at 09:43:20AM -0800, Yury Norov wrote:
> On Sun, Nov 28, 2021 at 09:07:52AM -0800, Joe Perches wrote:
> > On Sat, 2021-11-27 at 19:57 -0800, Yury Norov wrote:
> > > Add num_{possible,present,active}_cpus_{eq,gt,le} and replace num_*_cpus()
> > > with one of new function
On Sun, Nov 28, 2021 at 09:07:52AM -0800, Joe Perches wrote:
> On Sat, 2021-11-27 at 19:57 -0800, Yury Norov wrote:
> > Add num_{possible,present,active}_cpus_{eq,gt,le} and replace num_*_cpus()
> > with one of new functions where appropriate. This allows num_*_cpus_*()
> > to return earlier depend
On Sat, 2021-11-27 at 19:57 -0800, Yury Norov wrote:
> Add num_{possible,present,active}_cpus_{eq,gt,le} and replace num_*_cpus()
> with one of new functions where appropriate. This allows num_*_cpus_*()
> to return earlier depending on the condition.
[]
> diff --git a/arch/arc/kernel/smp.c b/arch/
Excerpts from Yury Norov's message of November 28, 2021 1:56 pm:
> In many cases people use bitmap_weight()-based functions like this:
>
> if (num_present_cpus() > 1)
> do_something();
>
> This may take considerable amount of time on many-cpus machines because
> num_present_cp
On Sat, Nov 27, 2021 at 07:56:58PM -0800, Yury Norov wrote:
> bitmap_weight() counts all set bits in the bitmap unconditionally.
> However in some cases we can traverse a part of bitmap when we
> only need to check if number of set bits is greater, less or equal
> to some number.
>
> This patch re
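A minimal sketch of the early-exit idea described above (the helper shape is
an assumption; the series' user-facing names are the num_*_cpus_{eq,gt,le}
wrappers built on such primitives):

/* relies on <linux/bitmap.h> / <linux/bitops.h> definitions */
static __always_inline bool bitmap_weight_gt(const unsigned long *src,
					     unsigned int nbits, unsigned int num)
{
	unsigned int k, w = 0;

	for (k = 0; k < nbits / BITS_PER_LONG; k++) {
		w += hweight_long(src[k]);
		if (w > num)
			return true;	/* stop as soon as the bound is exceeded */
	}

	if (nbits % BITS_PER_LONG)
		w += hweight_long(src[k] & BITMAP_LAST_WORD_MASK(nbits));

	return w > num;
}

With that, "num_present_cpus() > 1" becomes something like
num_present_cpus_gt(1), which can stop counting after the second set bit
instead of weighing the whole mask.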
On Thu, Nov 25, 2021 at 08:18:50AM +0530, Athira Rajeev wrote:
> Sort key p_stage_cyc is used to present the latency
> cycles spent in pipeline stages. The perf tool has a local
> p_stage_cyc sort key to display this info. There is no
> global variant available for this sort key. local variant
> shows la
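A hedged usage example of what the global variant would enable (sort-key
names assumed from the description above, not verified against the patch):

  perf report --sort=sym,p_stage_cyc          # aggregated (global) variant
  perf report --sort=sym,local_p_stage_cyc    # per-sample (local) variant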
https://bugzilla.kernel.org/show_bug.cgi?id=213837
--- Comment #13 from Erhard F. (erhar...@mailbox.org) ---
Created attachment 299755
--> https://bugzilla.kernel.org/attachment.cgi?id=299755&action=edit
dmesg (5.16-rc2 + patch, PowerMac G5 11,2)
Still happens with with 5.16-rc2, but getting a