Hi! Find below my third regression report for Linux 4.14. It lists 9
regressions I'm currently aware of. Two regressions got fixed since last
week's report.
As always: Are you aware of any other regressions? Then please let me
know by mail (a simple bounce or forward in my direction is enough!).
Fo
...
[ 1069.001518] [c03f95b3f770] [c00b2574] init_imc_pmu+0x1f4/0xc40
[ 1069.005374] [c03f95b3f850] [c008fec8] opal_imc_counters_probe+0x2e8/0x3e0
[ 1069.009426] [c03f95b3f950] [c06153a4] platform_drv_probe+0x44/0x90
[ 1069.012818] [c03
alloc_pages_node(), when passed NUMA_NO_NODE for the
node_id, could get memory from the closest node. Clean up
the core imc and thread imc memory init functions to use
NUMA_NO_NODE.
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/perf/imc-pmu.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
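
For illustration, a minimal sketch of the cleanup described above, with a
hypothetical helper name rather than the actual imc-pmu code:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/numa.h>

/* Hypothetical helper: rather than computing a node id by hand, pass
 * NUMA_NO_NODE so alloc_pages_node() falls back to the closest node
 * with free memory. */
static void *imc_alloc_zeroed(size_t size)
{
	struct page *page = alloc_pages_node(NUMA_NO_NODE,
					     GFP_KERNEL | __GFP_ZERO,
					     get_order(size));

	return page ? page_address(page) : NULL;
}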
On Sun, Oct 15, 2017 at 02:55:24PM +0200, Thorsten Leemhuis wrote:
> == Fixed since last report ==
>
> "hangs when building e.g. perf" & "Random insta-reboots on AMD Phenom II"
> Status: Fixed by https://git.kernel.org/torvalds/c/67bb8e999e0a
That should be: b956575bed91 ("x86/mm: Flush more aggressively in lazy TLB mode")
On 10/14/2017 06:13 AM, Benjamin Herrenschmidt wrote:
> No, he's saying this is useful for the developers when debugging the
> kernel driver (or for asking users to "test" something as part of
> debugging a driver problem).
>
> It's common to have various command line options affecting PCIe
> beha
On Wed, 04 Oct 2017 20:04:52 +1100
Michael Ellerman wrote:
> Hi Balbir,
>
> Mainly I think we need to add a check in mark_initmem_nx() too, don't we?
> Or should we put it somewhere common to both?
>
> But seeing as I'm replying here are some more comments.
>
> > Subject: [PATCH 1/2] powerpc/st
On Wed, 04 Oct 2017 22:14:17 +1100
Michael Ellerman wrote:
> Balbir Singh writes:
>
> > We were aggressive with splitting regions and missed the case
> > when _stext and __init_begin completely overlap addr and addr+mapping.
> >
> > This patch fixes that case and allows us to keep the largest p
On Mon, 16 Oct 2017 00:13:42 +0530
Madhavan Srinivasan wrote:
> alloc_pages_node(), when passed NUMA_NO_NODE for the
> node_id, could get memory from the closest node. Clean up
> the core imc and thread imc memory init functions to use
> NUMA_NO_NODE.
The changelog is not clear; alloc_pages_node() takes
On Thu, 12 Oct 2017 18:20:39 +0800
kbuild test robot wrote:
> Hi Balbir,
>
> [auto build test ERROR on powerpc/next]
> [also build test ERROR on v4.14-rc4 next-20171009]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improve the system]
>
> url:
> https:
On Fri, 2017-10-13 at 12:30 +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> This patch adjusts the memcmp_64 selftest so that the memcmp selftest can
> be compiled successfully.
>
Do they not compile at the moment?
> It also adds testcases for:
> - memcmp with sizes over 4K bytes.
> - s1/s2 with
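
As a rough illustration of the >4K case mentioned above (a sketch, not the
actual selftest): compare buffers larger than a page so the optimized
memcmp paths and a late difference are both exercised.

#include <assert.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	size_t len = 8192;                  /* > 4K, spans a page boundary */
	char *s1 = malloc(len);
	char *s2 = malloc(len);

	assert(s1 && s2);
	memset(s1, 0xa5, len);
	memcpy(s2, s1, len);
	assert(memcmp(s1, s2, len) == 0);   /* equal buffers */

	s2[len - 1] ^= 1;                   /* difference in the last byte */
	assert(memcmp(s1, s2, len) != 0);

	free(s1);
	free(s2);
	return 0;
}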
It would be nice to be able to dump page tables in a particular
context.
eg: dumping vmalloc space:
0:mon> dv 0xd00037f0
pgd @ 0xc17c
pgdp @ 0xc17c00d8 = 0xf10b1000
pudp @ 0xc000f10b13f8 = 0xf10d
pmdp @ 0xc000f10d1ff8 = 0x
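
A rough sketch of the walk behind such a dump, assuming the pre-p4d kernel
page-table helpers of this era; the real xmon code also handles huge pages
and invalid entries:

/* Sketch only: walk the kernel page tables for one virtual address and
 * print each level, as the "dv" output above does. */
static void dump_va(unsigned long va)
{
	pgd_t *pgdp = pgd_offset_k(va);
	pud_t *pudp;
	pmd_t *pmdp;
	pte_t *ptep;

	pr_info("pgdp @ 0x%p = 0x%016lx\n", pgdp, pgd_val(*pgdp));
	if (pgd_none(*pgdp))
		return;

	pudp = pud_offset(pgdp, va);
	pr_info("pudp @ 0x%p = 0x%016lx\n", pudp, pud_val(*pudp));
	if (pud_none(*pudp))
		return;

	pmdp = pmd_offset(pudp, va);
	pr_info("pmdp @ 0x%p = 0x%016lx\n", pmdp, pmd_val(*pmdp));
	if (pmd_none(*pmdp))
		return;

	ptep = pte_offset_kernel(pmdp, va);
	pr_info("ptep @ 0x%p = 0x%016lx\n", ptep, pte_val(*ptep));
}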
Michael Ellerman writes:
> From: Balbir Singh
>
> It would be nice to be able to dump page tables in a particular
> context.
>
> eg: dumping vmalloc space:
>
> 0:mon> dv 0xd00037f0
> pgd @ 0xc17c
> pgdp @ 0xc17c00d8 = 0xf10b1000
> pudp @ 0xc000f10
On Mon, Oct 16, 2017 at 2:33 PM, Balbir Singh wrote:
> It would be nice to be able to dump page tables in a particular
> context.
>
Should be v4 and not v2. Resending.
Balbir
It would be nice to be able to dump page tables in a particular
context.
eg: dumping vmalloc space:
0:mon> dv 0xd00037f0
pgd @ 0xc17c
pgdp @ 0xc17c00d8 = 0xf10b1000
pudp @ 0xc000f10b13f8 = 0xf10d
pmdp @ 0xc000f10d1ff8 = 0x
Add a selftest to exercise the powerpc alignment fault handler.
Signed-off-by: Michael Neuling
Signed-off-by: Andrew Donnellan
---
tools/testing/selftests/powerpc/alignment/Makefile | 3 +-
.../powerpc/alignment/alignment_handler.c | 488 +
2 files changed, 490 insertions(+), 1 deletion(-)
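
As a rough illustration of what the selftest exercises (a sketch under
simplifying assumptions, not the actual 488-line test): perform a
deliberately misaligned access and check the result is still correct,
whether the CPU handles it directly or the kernel's alignment handler
emulates it.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[64] __attribute__((aligned(16)));
	unsigned long long v = 0x1122334455667788ULL, r = 0;

	memset(buf, 0, sizeof(buf));
	memcpy(buf + 1, &v, sizeof(v));     /* place value at an odd offset */

	/* Misaligned 8-byte load: on some CPUs/configurations this traps
	 * and the kernel alignment handler emulates it. */
	r = *(unsigned long long *)(buf + 1);
	printf("misaligned load: %#llx (%s)\n", r,
	       r == v ? "ok" : "MISMATCH");
	return 0;
}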
On Mon, Oct 16, 2017 at 2:34 PM, Aneesh Kumar K.V
wrote:
> Michael Ellerman writes:
>
>> From: Balbir Singh
>>
>> It would be nice to be able to dump page tables in a particular
>> context.
>>
>> eg: dumping vmalloc space:
>>
>> 0:mon> dv 0xd00037f0
>> pgd @ 0xc17c
>>
Balbir Singh writes:
> There are no users of get_mce_fault_addr()
>
> Fixes: b63a0ff ("powerpc/powernv: Machine check exception handling.")
That fixes line is wrong, get_mce_fault_addr() was used in that commit.
The last usage was removed in:
1363875bdb63 ("powerpc/64s: fix handling of non-synchronous machine checks")
Balbir Singh writes:
> Use the same alignment as Effective address
> and rename physical address to Page Frame Number
You didn't do the 2nd part AFAICS?
Will fix it up.
cheers
> diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
> index e254399..fef1408 100644
> --- a/arch/pow
When using the radix MMU on Power9 DD1, to work around a hardware
problem, radix__pte_update() is required to do a two stage update of
the PTE. First we write a zero value into the PTE, then we flush the
TLB, and then we write the new PTE value.
In the normal case that works OK, but it does not wo
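
In outline, the two-stage sequence described above looks like this
(hypothetical helper names, not the real radix__pte_update()):

/* Sketch of the Power9 DD1 workaround: never go directly from one valid
 * PTE to another; clear the PTE, flush, then install the new value. */
static void dd1_pte_update(pte_t *ptep, pte_t new_pte, unsigned long addr)
{
	WRITE_ONCE(*ptep, __pte(0));    /* stage 1: invalidate the PTE */
	flush_tlb_for_addr(addr);       /* stage 2: hypothetical TLB flush */
	WRITE_ONCE(*ptep, new_pte);     /* stage 3: install the new PTE */
}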
On Mon, Oct 16, 2017 at 4:13 PM, Michael Ellerman wrote:
> Balbir Singh writes:
>
>> There are no users of get_mce_fault_addr()
>>
>> Fixes: b63a0ff ("powerpc/powernv: Machine check exception handling.")
>
> That fixes line is wrong, get_mce_fault_addr() was used in that commit.
>
> The last usag
On 10/16/2017 10:43 AM, Balbir Singh wrote:
On Mon, Oct 16, 2017 at 2:34 PM, Aneesh Kumar K.V
wrote:
Michael Ellerman writes:
+
+#ifdef CONFIG_HUGETLB_PAGE
+	if (pud_huge(*pudp)) {
+		format_pte(pudp, pud_val(*pudp));
+		return;
+	}
+#endif
For page table w
Balbir Singh writes:
> diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
> index b76ca19..0e584d5 100644
> --- a/arch/powerpc/kernel/mce_power.c
> +++ b/arch/powerpc/kernel/mce_power.c
> @@ -27,6 +27,36 @@
> #include
> #include
> #include
> +#include
> +#includ
On Mon, Oct 16, 2017 at 4:18 PM, Michael Ellerman wrote:
> Balbir Singh writes:
>
>> Use the same alignment as Effective address
>> and rename physical address to Page Frame Number
>
> You didn't do the 2nd part AFAICS?
>
> Will fix it up.
Sorry, the changelog is buggy :( Nick asked me to keep th
On Mon, Oct 16, 2017 at 4:36 PM, Michael Ellerman wrote:
> Balbir Singh writes:
>
>> diff --git a/arch/powerpc/kernel/mce_power.c
>> b/arch/powerpc/kernel/mce_power.c
>> index b76ca19..0e584d5 100644
>> --- a/arch/powerpc/kernel/mce_power.c
>> +++ b/arch/powerpc/kernel/mce_power.c
>> @@ -27,6 +2
Balbir Singh writes:
> If we are in user space and hit a UE error, we now have the
> basic infrastructure to walk the page tables and find out
> the effective address that was accessed, since the DAR
> is not valid.
>
> We use a work_queue context to hook up the bad pfn; any
> other context causes
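
A minimal sketch of the deferral being described, with hypothetical names:
the machine check path cannot sleep, so it only queues the bad pfn and a
workqueue item does the page-table walk in process context.

#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct mce_ue_work {
	struct work_struct work;
	unsigned long pfn;
};

static void mce_ue_worker(struct work_struct *w)
{
	struct mce_ue_work *uw = container_of(w, struct mce_ue_work, work);

	/* Process context: safe to walk page tables, resolve the effective
	 * address and poison the page. */
	pr_err("UE at pfn %#lx, handling deferred\n", uw->pfn);
	kfree(uw);
}

/* Called from the machine check path, which must not sleep. */
static void mce_queue_ue(unsigned long pfn)
{
	struct mce_ue_work *uw = kzalloc(sizeof(*uw), GFP_ATOMIC);

	if (!uw)
		return;
	uw->pfn = pfn;
	INIT_WORK(&uw->work, mce_ue_worker);
	schedule_work(&uw->work);
}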
On Mon, Oct 16, 2017 at 4:38 PM, Michael Ellerman wrote:
> Balbir Singh writes:
>
>> If we are in user space and hit a UE error, we now have the
>> basic infrastructure to walk the page tables and find out
>> the effective address that was accessed, since the DAR
>> is not valid.
>>
>> We use a w
At the moment, on a guest with 256 CPUs and 256 PCI devices, it takes
the guest about 8.5sec to read the entire device tree. Some explanation can be
found here: https://patchwork.ozlabs.org/patch/826124/ but mostly it is
because the kernel traverses the tree twice and calls "getprop" for
each property whic
The current vDSO64 implementation does not support the coarse clocks
(CLOCK_MONOTONIC_COARSE, CLOCK_REALTIME_COARSE); for these it falls back
to the system call, which increases the response time. A vDSO
implementation reduces the cycle time. Below is a benchmark of the
difference in execution times.
(Non-c
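
A quick sketch of how such a benchmark can be written (illustrative only,
not the posted numbers): time a large batch of clock_gettime() calls per
clock id and compare.

#include <stdio.h>
#include <time.h>

static long long bench_ns(clockid_t id, long iters)
{
	struct timespec t0, t1, ts;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++)
		clock_gettime(id, &ts);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	return (t1.tv_sec - t0.tv_sec) * 1000000000LL +
	       (t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
	long iters = 10000000;

	printf("CLOCK_REALTIME:        %lld ns\n",
	       bench_ns(CLOCK_REALTIME, iters));
	printf("CLOCK_REALTIME_COARSE: %lld ns\n",
	       bench_ns(CLOCK_REALTIME_COARSE, iters));
	return 0;
}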
On Mon, Oct 16, 2017 at 04:49:17PM +1100, Alexey Kardashevskiy wrote:
> At the moment, on a guest with 256 CPUs and 256 PCI devices, it takes
> the guest about 8.5sec to read the entire device tree. Some explanation can be
> found here: https://patchwork.ozlabs.org/patch/826124/ but mostly it is
> because the
On 16/10/17 17:11, David Gibson wrote:
> On Mon, Oct 16, 2017 at 04:49:17PM +1100, Alexey Kardashevskiy wrote:
>> At the moment, on a guest with 256 CPUs and 256 PCI devices, it takes
>> the guest about 8.5sec to read the entire device tree. Some explanation can be
>> found here: https://patchwork.ozlabs.org/
On Mon, Oct 16, 2017 at 05:22:55PM +1100, Alexey Kardashevskiy wrote:
> On 16/10/17 17:11, David Gibson wrote:
> > On Mon, Oct 16, 2017 at 04:49:17PM +1100, Alexey Kardashevskiy wrote:
> >> At the moment, on a guest with 256 CPUs and 256 PCI devices, it takes
> >> the guest about 8.5sec to read the entire dev