cture to disable inline instrumentation
> kasan: allow architectures to provide an outline readiness check
> mm: define default MAX_PTRS_PER_* in include/pgtable.h
> kasan: use MAX_PTRS_PER_* for early shadow tables
>
The series seems reasonable.
Reviewed-by: Balbir Singh
we start at c00e...
> >> + */
> >> +
> >
> > assuming we have
> > #define VMEMMAP_END R_VMEMMAP_END
> > and ditto for hash we probably need
> >
> > BUILD_BUG_ON(VMEMMAP_END + KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
>
> Sorry, I'm not sure what this is supposed to be testing? In what
> situation would this trigger?
>
I am a bit concerned that we have hard-coded (IIRC) 0xa80e... in the
config; any changes to VMEMMAP_END or KASAN_SHADOW_OFFSET/END
should be guarded.
Balbir Singh.
ny code that runs with translations off after
> booting. Take this approach for now and require outline instrumentation.
>
> Previous attempts allowed inline instrumentation. However, they came with
> some unfortunate restrictions: only physically contiguous memory could be
> used and
VE_ARCH_KASAN_HW_TAGS
> config HAVE_ARCH_KASAN_VMALLOC
> bool
>
> +config ARCH_DISABLE_KASAN_INLINE
> + def_bool n
> +
Some comments on which architectures want to disable inline KASAN, and
why, would be helpful.
Balbir Singh.
. Both 64k and 4k pages work. Running as a KVM host works, but
> nothing in arch/powerpc/kvm is instrumented. It's also potentially a bit
> fragile - if any real mode code paths call out to instrumented code, things
> will go boom.
>
The last time I checked, the changes for real mode made the code hard to
review and maintain. I am happy to see that we've decided to leave that off
the table for now; reviewing the series.
Balbir Singh.
RS_PER_*s in the same style as MAX_PTRS_PER_P4D.
> As KASAN is the only user at the moment, just define them in the kasan
> header, and have them default to PTRS_PER_* unless overridden in arch
> code.
>
> Suggested-by: Christophe Leroy
> Suggested-by: Balbir Singh
> Signe
cross different configurations?
>> BTW, the current set of patches just hangs if I try to make the default
>> mode out of line
>
> Do you have CONFIG_RELOCATABLE?
>
> I've tested the following process:
>
> # 1) apply patches on a fresh linux-next
> # 2) output dir
> mkdir ../out-3s-kasan
>
> # 3) merge in the relevant config snippets
> cat > kasan.config << EOF
> CONFIG_EXPERT=y
> CONFIG_LD_HEAD_STUB_CATCH=y
>
> CONFIG_RELOCATABLE=y
>
> CONFIG_KASAN=y
> CONFIG_KASAN_GENERIC=y
> CONFIG_KASAN_OUTLINE=y
>
> CONFIG_PHYS_MEM_SIZE_FOR_KASAN=2048
> EOF
>
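A sketch of how the snippet above can be merged into a base config (paths and the cross-compiler prefix are illustrative; adjust for your toolchain):

```shell
# Start from a base defconfig in the output dir, then fold in kasan.config
make O=../out-3s-kasan ARCH=powerpc CROSS_COMPILE=powerpc64le-linux-gnu- \
    pseries_le_defconfig
./scripts/kconfig/merge_config.sh -O ../out-3s-kasan \
    ../out-3s-kasan/.config kasan.config
```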
I think I got CONFIG_PHYS_MEM_SIZE_FOR_KASAN wrong; honestly, I don't get why
we need this size. The size is in MB and the default is 0.
Why does the powerpc port of KASAN need the size to be explicitly specified?
Balbir Singh.
k
>
> It's actually super easy to do simple boot tests with qemu, it works fine in
> TCG,
> Michael's wiki page at
> https://github.com/linuxppc/wiki/wiki/Booting-with-Qemu is very helpful.
>
> I did this a lot in development.
>
> My full commandline, fwiw, is:
>
> qemu-system-ppc64 -m 8G -M pseries -cpu power9 -kernel
> ../out-3s-radix/vmlinux -nographic -chardev stdio,id=charserial0,mux=on
> -device spapr-vty,chardev=charserial0,reg=0x3000 -initrd
> ./rootfs-le.cpio.xz -mon chardev=charserial0,mode=readline -nodefaults -smp 4
qemu has been crashing with KASAN enabled, with both the inline and
out-of-line options. I am running linux-next plus the four patches you've
posted. In one case I get a panic, and a hang in the other. I can confirm
that when I disable KASAN, the issue disappears.
Balbir Singh.
>
> Regards,
> Daniel
>
On 10/12/19 3:47 pm, Daniel Axtens wrote:
> KASAN support on powerpc64 is challenging:
>
> - We want to be able to support inline instrumentation so as to be
>able to catch global and stack issues.
>
> - We run some code in real mode after boot, most notably a lot of
>KVM code. We'd
On 10/12/19 3:47 pm, Daniel Axtens wrote:
> This helps with powerpc support, and should have no effect on
> anything else.
>
> Suggested-by: Christophe Leroy
> Signed-off-by: Daniel Axtens
If you follow the recommendations from Christophe and me, you don't need this
S_PER_PMD ((H_PTRS_PER_PMD > R_PTRS_PER_PMD) ? \
> + H_PTRS_PER_PMD : R_PTRS_PER_PMD)
> +#define MAX_PTRS_PER_PUD ((H_PTRS_PER_PUD > R_PTRS_PER_PUD) ? \
> + H_PTRS_PER_PUD : R_PTRS_PER_PUD)
> +
How about reusing max()?
#define MAX_PTRS_PER_PTE max(H_PTRS_PER_PTE, R_PTRS_PER_PTE)
#define MAX_PTRS_PER_PMD max(H_PTRS_PER_PMD, R_PTRS_PER_PMD)
#define MAX_PTRS_PER_PUD max(H_PTRS_PER_PUD, R_PTRS_PER_PUD)
Balbir Singh.
}
> + }
> +
> + return n;
Do we always return n, independent of the check_copy_size and
access_ok return values?
Balbir Singh.
> +}
> +
> extern unsigned long __clear_user(void __user *addr, unsigned long size);
>
> static inline unsigned long clear_user(void __user *addr, unsigned long size)
>
Cc: Mahesh Salgaonkar
> Signed-off-by: Santosh Sivaraj
> ---
Isn't this based on https://patchwork.ozlabs.org/patch/895294/? If so, it
should still carry my author tag and Signed-off-by.
Balbir Singh
> arch/powerpc/include/asm/mce.h | 4 +++-
> arch/powerpc/kernel/mce.c
On 12/8/19 7:22 pm, Santosh Sivaraj wrote:
> In certain architecture-specific operating modes (e.g., the powerpc machine
> check handler, which is unable to access vmalloc memory),
> search_exception_tables cannot be called, because it also searches the
> module exception tables if the entry is not fou
lgaonkar
> Signed-off-by: Santosh Sivaraj
> Cc: sta...@vger.kernel.org # v4.15+
> ---
Acked-by: Balbir Singh
On Mon, Feb 18, 2019 at 11:49:18AM +1100, Michael Ellerman wrote:
> Balbir Singh writes:
> > On Sun, Feb 17, 2019 at 07:34:20PM +1100, Michael Ellerman wrote:
> >> Balbir Singh writes:
> >> > On Sat, Feb 16, 2019 at 08:22:12AM -0600, Segher Boessenkool wrote:
>
On Sun, Feb 17, 2019 at 07:34:20PM +1100, Michael Ellerman wrote:
> Balbir Singh writes:
> > On Sat, Feb 16, 2019 at 08:22:12AM -0600, Segher Boessenkool wrote:
> >> Hi all,
> >>
> >> On Sat, Feb 16, 2019 at 09:55:11PM +1100, Balbir Singh wrote:
> >&g
the kasan core are going to be required
> for hash and radix as well.
>
Thanks for following through with this. Could you please share details on
how you've been testing it?
I know qemu supports -cpu e6500, but beyond that, what does the machine
look like?
Balbir Singh.
On Sat, Feb 16, 2019 at 08:22:12AM -0600, Segher Boessenkool wrote:
> Hi all,
>
> On Sat, Feb 16, 2019 at 09:55:11PM +1100, Balbir Singh wrote:
> > On Thu, Feb 14, 2019 at 05:23:39PM +1100, Michael Ellerman wrote:
> > > In v4.20 we changed our pgd/pud_present() to
);
> }
>
> extern struct page *pud_page(pud_t pud);
> @@ -951,7 +951,7 @@ static inline int pgd_none(pgd_t pgd)
>
> static inline int pgd_present(pgd_t pgd)
> {
> - return (pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
> + return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
> }
>
Care to put a big FAT warning, so that we don't repeat this again
(for authors planning to change these bits)?
Balbir Singh.
On Wed, Feb 6, 2019 at 3:44 PM Michael Ellerman wrote:
>
> Balbir Singh writes:
> > On Tue, Feb 5, 2019 at 10:24 PM Michael Ellerman
> > wrote:
> >> Balbir Singh writes:
> >> > On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh
> >> > wrote:
&g
e looks good to me as well.
>
> Reviewed-by: Alistair Popple
>
I checked the three callers of set_pte_at_notify and the assumption
seems correct
Reviewed-by: Balbir Singh
On Tue, Feb 5, 2019 at 10:24 PM Michael Ellerman wrote:
>
> Balbir Singh writes:
> > On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh wrote:
> >>
> >> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> >> > From: Nicolai Stange
> >>
On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh wrote:
>
> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> > From: Nicolai Stange
> >
> > The ppc64 specific implementation of the reliable stacktracer,
> > save_stack_trace_tsk_reliable(), bails
On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> From: Nicolai Stange
>
> The ppc64 specific implementation of the reliable stacktracer,
> save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
> trace" whenever it finds an exception frame on the stack. Stack frames
> arch-specific implementations consistent.
>
> Signed-off-by: Joe Lawrence
Seems straightforward.
Acked-by: Balbir Singh
On Sat, Jan 12, 2019 at 02:45:41AM -0600, Segher Boessenkool wrote:
> On Sat, Jan 12, 2019 at 12:09:14PM +1100, Balbir Singh wrote:
> > Could you please define interesting frame on top a bit more? Usually
> > the topmost return address is in LR
>
> There is no reliable
c0abd628
> c0abd628 (T) schedule+0x48
>
> [ ... etc ... ]
>
>
> save_stack_trace_tsk_reliable
> =========
>
> arch/powerpc/kernel/stacktrace.c :: save_stack_trace_tsk_reliable() does
> take into account the first stackframe, but only to verify that the link
> register is indeed pointing at kernel code address.
>
> Can someone explain what __switch_to is doing with the stack and whether
> in such circumstances, the reliable stack unwinder should be skipping
> the first frame when verifying the frame contents like STACK_FRAME_MARKER,
> etc.
>
> I may be on the wrong path in debugging this, but figuring out this
> sp[0] frame state would be helpful.
>
I would compare the output of xmon across the unreliable stack frames with
what the stack unwinder has.
I suspect the patch is stuck trying to transition to the enabled state; it'll
be interesting to see if we are really stuck.
Balbir Singh.
r 1 GPU and attached NPUs for POWER8 */
> - pe->npucomp = kzalloc(sizeof(pe->npucomp), GFP_KERNEL);
> + pe->npucomp = kzalloc(sizeof(*pe->npucomp), GFP_KERNEL);
To avoid these in the future, I wonder if, instead of sizeof(pe->npucomp), we
should insist on the sizeof of the structure:
pe->npucomp = kzalloc(sizeof(struct npucomp), GFP_KERNEL);
Acked-by: Balbir Singh
is should allow better concurrency for massively threaded
Question: I presume the mmap_sem (the rw_semaphore implementation tested
against) was qrwlock?
Balbir Singh.
On Mon, Oct 22, 2018 at 10:48:36AM +0530, Bharata B Rao wrote:
> H_SVM_INIT_START: Initiate securing a VM
> H_SVM_INIT_DONE: Conclude securing a VM
>
> During early guest init, these hcalls will be issued by UV.
> As part of these hcalls, [un]register memslots with UV.
>
> Signed-off-by: Bharata
On Mon, Oct 22, 2018 at 10:48:35AM +0530, Bharata B Rao wrote:
> A secure guest will share some of its pages with hypervisor (Eg. virtio
> bounce buffers etc). Support shared pages in HMM driver.
>
> Signed-off-by: Bharata B Rao
> ---
> arch/powerpc/kvm/book3s_hv_hmm.c | 69 +
);
> + vma = find_vma_intersection(mm, addr, end);
> + if (!vma || vma->vm_start > addr || vma->vm_end < end) {
> + ret = H_PARAMETER;
> + goto out;
> + }
> + ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
> + &src_pfn, &dst_pfn, NULL);
> + if (ret < 0)
> + ret = H_PARAMETER;
> +out:
> + up_read(&mm->mmap_sem);
> + return ret;
> +}
> +
> +/*
> + * TODO: Number of secure pages and the page size order would probably come
> + * via DT or via some uvcall. Return 8G for now.
> + */
> +static unsigned long kvmppc_get_secmem_size(void)
> +{
> + return (1UL << 33);
> +}
> +
> +static int kvmppc_hmm_pages_init(void)
> +{
> + unsigned long nr_pfns = kvmppc_hmm->devmem->pfn_last -
> + kvmppc_hmm->devmem->pfn_first;
> +
> + kvmppc_hmm->pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
> + sizeof(unsigned long), GFP_KERNEL);
> + if (!kvmppc_hmm->pfn_bitmap)
> + return -ENOMEM;
> +
> + spin_lock_init(&kvmppc_hmm_lock);
> +
> + return 0;
> +}
> +
> +int kvmppc_hmm_init(void)
> +{
> + int ret = 0;
> + unsigned long size = kvmppc_get_secmem_size();
Can you spell out secmem as secure_mem?
> +
> + kvmppc_hmm = kzalloc(sizeof(*kvmppc_hmm), GFP_KERNEL);
> + if (!kvmppc_hmm) {
> + ret = -ENOMEM;
> + goto out;
> + }
> +
> + kvmppc_hmm->device = hmm_device_new(NULL);
> + if (IS_ERR(kvmppc_hmm->device)) {
> + ret = PTR_ERR(kvmppc_hmm->device);
> + goto out_free;
> + }
> +
> + kvmppc_hmm->devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
> + &kvmppc_hmm->device->device, size);
IIUC, there is just one HMM device for all the secure memory in the
system?
> + if (IS_ERR(kvmppc_hmm->devmem)) {
> + ret = PTR_ERR(kvmppc_hmm->devmem);
> + goto out_device;
> + }
> + ret = kvmppc_hmm_pages_init();
> + if (ret < 0)
> + goto out_devmem;
> +
> + return ret;
> +
> +out_devmem:
> + hmm_devmem_remove(kvmppc_hmm->devmem);
> +out_device:
> + hmm_device_put(kvmppc_hmm->device);
> +out_free:
> + kfree(kvmppc_hmm);
> + kvmppc_hmm = NULL;
> +out:
> + return ret;
> +}
> +
> +void kvmppc_hmm_free(void)
> +{
> + kfree(kvmppc_hmm->pfn_bitmap);
> + hmm_devmem_remove(kvmppc_hmm->devmem);
> + hmm_device_put(kvmppc_hmm->device);
> + kfree(kvmppc_hmm);
> + kvmppc_hmm = NULL;
> +}
Balbir Singh.
On Sat, Oct 27, 2018 at 12:39:17PM -0700, Joel Fernandes wrote:
> Hi Balbir,
>
> On Sat, Oct 27, 2018 at 09:21:02PM +1100, Balbir Singh wrote:
> > On Wed, Oct 24, 2018 at 07:13:50PM -0700, Joel Fernandes wrote:
> > > On Wed, Oct 24, 2018 at 10:57:33PM +
On Wed, Oct 24, 2018 at 07:13:50PM -0700, Joel Fernandes wrote:
> On Wed, Oct 24, 2018 at 10:57:33PM +1100, Balbir Singh wrote:
> [...]
> > > > + pmd_t pmd;
> > > > +
> > > > + new_ptl = pmd_lockptr(mm, new_pmd);
> >
On Wed, Oct 24, 2018 at 01:12:56PM +0300, Kirill A. Shutemov wrote:
> On Fri, Oct 12, 2018 at 06:31:58PM -0700, Joel Fernandes (Google) wrote:
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index 9e68a02a52b1..2fd163cff406 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -191,6 +191,54 @@
Cc: Michael Ellerman
> Cc: Rashmica Gupta
> Cc: Balbir Singh
> Cc: Michael Neuling
> Reviewed-by: Pavel Tatashin
> Reviewed-by: Rashmica Gupta
> Signed-off-by: David Hildenbrand
> ---
> arch/powerpc/platforms/powernv/memtrace.c | 4 +++-
> 1 file changed, 3 insert
On Wed, Sep 19, 2018 at 09:35:07AM +0200, David Hildenbrand wrote:
> Am 19.09.18 um 03:22 schrieb Balbir Singh:
> > On Tue, Sep 18, 2018 at 01:48:16PM +0200, David Hildenbrand wrote:
> >> Reading through the code and studying how mem_hotplug_lock is to be used,
> >> I
write the locks need to be held? For example, can device_hotplug_lock
be held in read mode while add/remove memory (via mem_hotplug_lock) is held
in write mode?
Balbir Singh.
On Thu, Jun 21, 2018 at 6:31 PM, Aneesh Kumar K.V
wrote:
>
> We do this only with VMEMMAP config so that our page_to_[nid/section] etc are
> not
> impacted.
>
> Signed-off-by: Aneesh Kumar K.V
Why 128TB? Given that it's sparse_vmemmap_extreme by default, why not
1PB dire
avoids the old timespec type and the HW access.
>
> Signed-off-by: Arnd Bergmann
> ---
Looks good to me!
Acked-by: Balbir Singh
Balbir Singh
sion: Linux version 4.17.0-autotest
>>> >>
>>> >>I am seeing this bug on rc7 as well.
>
> Observing similar traces on linux next kernel: 4.17.0-next-20180608-autotest
>
> Block size [0x400] unaligned hotplug range: start 0x22000, size
> 0x100
size < block_size in this case; why and how? Could you confirm that the block
size is 64MB and you're trying to remove 16MB?
Balbir Singh.
On 12/06/18 06:20, Mathieu Malaterre wrote:
> Hi Meelis,
>
> On Mon, Jun 11, 2018 at 1:21 PM Meelis Roos wrote:
>> I am seeing this on PowerMac G4 with sungem ethernet driver. 4.17 was
>> OK, 4.17.0-10146-gf0dc7f9c6dd9 is problematic.
> Same here.
>
>> [ 140.518664] eth0: hw csum failure
>> [
On Fri, Jun 1, 2018 at 2:54 PM, Gautham R Shenoy
wrote:
> Hi Balbir,
>
> Thanks for reviewing the patch!
>
> On Fri, Jun 01, 2018 at 12:51:05AM +1000, Balbir Singh wrote:
>> On Thu, May 31, 2018 at 10:15 PM, Gautham R. Shenoy
>
> [..snip..]
>> >
>&g
&drv->states[i];
> + struct cpuidle_state_usage *su = &dev->states_usage[i];
> +
> + if (s->disabled || su->disable)
> + continue;
> +
> + return s->target_residency * tb_ticks_per_usec;
Can we ensure this is not prone to overflow?
Otherwise looks good
Reviewed-by: Balbir Singh
u
want to look them.
You're right in that we'll try to allocate 128 MB from
the CMA region (based on the 1/128th calculation that I remember). If we
can figure out what memory is allocated in the CMA region, we can debug
this further. Does the sum of HPT allocations add up to the used CMA memory?
Balbir Singh.
t cpus, whereas one of the cpus
might be online and beyond the max present cpus, due to the hole.
Reviewed-by: Balbir Singh
Balbir Singh.
currently it means the opposite, the general interrupt type has been
> disabled).
>
> Fix this by using the name irqmask, and printing it in hex.
>
> Signed-off-by: Nicholas Piggin
>
Acked-by: Balbir Singh
On Wed, May 9, 2018 at 5:43 PM, Nicholas Piggin wrote:
> On Wed, 9 May 2018 17:07:47 +1000
> Balbir Singh wrote:
>
>> On Wed, May 9, 2018 at 4:51 PM, Nicholas Piggin wrote:
>> > Radix flushes the TLB when updating ptes to increase permissiveness
>> > of prot
wc ? RIC_FLUSH_ALL:
> RIC_FLUSH_TLB);
> + } else {
> + if (mm_is_singlethreaded(mm)) {
> + _tlbie_pid(pid, RIC_FLUSH_ALL);
> + mm_reset_thread_local(mm);
> + } else {
> + if (mm_needs_flush_escalation(mm))
> + also_pwc = true;
> +
> + _tlbie_pid(pid, also_pwc ? RIC_FLUSH_ALL :
> RIC_FLUSH_TLB);
> + }
> + }
> } else {
> if (local)
> _tlbiel_va_range(start, end, pid, page_size, psize,
> also_pwc);
Looks good otherwise
Reviewed-by: Balbir Singh
Balbir Singh.
_flags(vma->vm_mm, ptep, entry, address);
> - flush_tlb_page(vma, address);
> + if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
> + flush_tlb_page(vma, address);
Same as above
Balbir Singh.
ther this is all too late for 4.17 is another question...
>
> Here is the x86 version of a 'bytes remaining' memcpy_mcsafe() implemenation:
>
> https://lists.01.org/pipermail/linux-nvdimm/2018-May/015548.html
Thanks for the heads up! I'll work on the implementation for powerpc.
Balbir Singh.
On Wed, May 2, 2018 at 4:26 PM, Alexey Kardashevskiy wrote:
> On 2/5/18 3:53 pm, Balbir Singh wrote:
>> On Wed, 2 May 2018 14:07:23 +1000
>> Alexey Kardashevskiy wrote:
>>
>>> At the moment we only support in the host the IOMMU page sizes which
>>> the gu
On Wed, May 2, 2018 at 6:38 PM, Nicholas Piggin wrote:
> On Tue, 01 May 2018 23:07:28 +1000
> Balbir Singh wrote:
>
>> On Tue, 2018-05-01 at 12:22 +1000, Nicholas Piggin wrote:
>> > Provide timebase and timebase of last heartbeat in watchdog lockup
>> > messag
On Wed, 2 May 2018 15:10:33 +1000
rashmica wrote:
> Tested hot-unplugging dimm device on radix guest on p9 host with KVM.
>
>
> On 01/05/18 12:57, Balbir Singh wrote:
> > This commit was a stop-gap to prevent crashes on hotunplug, caused by
> > the mismatch between the
On Wed, 2 May 2018 14:07:23 +1000
Alexey Kardashevskiy wrote:
> At the moment we only support in the host the IOMMU page sizes which
> the guest is aware of, which is 4KB/64KB/16MB. However P9 does not support
> 16MB IOMMU pages, 2MB and 1GB pages are supported instead. We can still
> emulate bi
ooks like you don't want
to reset the tb, but I would split it out
> wd_smp_lock(&flags);
> if (cpumask_test_cpu(cpu, &wd_smp_cpus_stuck)) {
> wd_smp_unlock(&flags);
> @@ -254,7 +267,10 @@ void soft_nmi_interrupt(struct pt_regs *regs)
> }
> set_cpu_stuck(cpu, tb);
>
> - pr_emerg("CPU %d self-detected hard LOCKUP @ %pS\n", cpu, (void
> *)regs->nip);
> + pr_emerg("CPU %d self-detected hard LOCKUP @ %pS\n",
> + cpu, (void *)regs->nip);
> + pr_emerg("CPU %d TB:%lld, last heartbeat TB:%lld\n",
> + cpu, get_tb(), per_cpu(wd_timer_tb, cpu));
> print_modules();
> print_irqtrace_events(current);
> show_regs(regs);
Balbir Singh.
("powerpc/mm/radix: Split linear mapping on hot-unplug").
Signed-off-by: Balbir Singh
Signed-off-by: Michael Neuling
---
Resend with a newer commit message grabbed from an email sent by mpe.
arch/powerpc/platforms/powernv/setup.c | 10 +-
1 file changed, 1 insertion(+), 9
On Mon, Apr 30, 2018 at 8:43 PM, Michael Ellerman wrote:
> Balbir Singh writes:
>> This reverts commit 53ecde0b9126ff140abe3aefd7f0ec64d6fa36b0.
>
> Firstly everything here only applies to Radix, so we need to say that.
The subject mentions it :)
>
>> The commit above c
ff-by: Balbir Singh
Signed-off-by: Michael Neuling
---
arch/powerpc/platforms/powernv/setup.c | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/setup.c
b/arch/powerpc/platforms/powernv/setup.c
index ef8c9ce53a61..fa63d3fff14c 100644
--- a
_pfn(regs, addr,
> + phys_addr);
> }
> found = 1;
> }
> @@ -572,7 +570,7 @@ static long mce_handle_error(struct pt_regs *regs,
> const struct mce_ierror_table itable[])
> {
> struct mce_error_info mce_err = { 0 };
> - uint64_t addr, phys_addr;
> + uint64_t addr, phys_addr = ULONG_MAX;
> uint64_t srr1 = regs->msr;
> long handled;
>
>
Reviewed-by: Balbir Singh
On Mon, 2018-04-23 at 23:01 +1000, Nicholas Piggin wrote:
> On Mon, 23 Apr 2018 21:14:12 +1000
> Balbir Singh wrote:
>
> > On Mon, Apr 23, 2018 at 8:33 PM, Mahesh Jagannath Salgaonkar
> > wrote:
> > > On 04/23/2018 12:21 PM, Balbir Singh wrote:
> > > &g
On Mon, Apr 23, 2018 at 8:33 PM, Mahesh Jagannath Salgaonkar
wrote:
> On 04/23/2018 12:21 PM, Balbir Singh wrote:
>> On Mon, Apr 23, 2018 at 2:59 PM, Mahesh J Salgaonkar
>> wrote:
>>> From: Mahesh Salgaonkar
>>>
>>> The current code extracts the physica
On Mon, Apr 23, 2018 at 4:51 PM, Balbir Singh wrote:
> On Mon, Apr 23, 2018 at 2:59 PM, Mahesh J Salgaonkar
> wrote:
>> From: Mahesh Salgaonkar
>>
>> The current code extracts the physical address for UE errors and then
>> hooks it up into memory failure infrastruc
[ 325.384336] Severe Machine check interrupt [Not recovered]
How did you test for this? If the error was recovered, shouldn't the
process have gotten a SIGBUS, and shouldn't we have prevented further access
as part of the handling (memory_failure())? Do we just need MF_MUST_KILL in
the flags?
Why shouldn't we treat it as handled if we isolate the page?
Thanks,
Balbir Singh.
On Mon, 16 Apr 2018 16:57:12 +0530
"Aneesh Kumar K.V" wrote:
> This patch series adds a split pmd pagetable lock for book3s64. nohash64
> should also
> be able to switch to this. I need to work out the code dependency. This series
> also might have broken the build on platforms other than book3s64. I
On Tue, Apr 17, 2018 at 7:17 PM, Balbir Singh wrote:
> On Tue, Apr 17, 2018 at 7:11 PM, Alistair Popple
> wrote:
>> The NPU has a limited number of address translation shootdown (ATSD)
>> registers and the GPU has limited bandwidth to process ATSDs. This can
>> resu
s_create_x64("atsd_threshold",
Nit-pick: can we call this atsd_threshold_in_bytes?
> +0600, powerpc_debugfs_root, &atsd_threshold);
> + }
> +
> phb->npu.nmmu_flush =
> of_property_read_bool(phb->hose->dn, "ibm,nmmu-flush");
> for_each_child_of_node(phb->hose->dn, dn) {
Acked-by: Balbir Singh
ddress < end; address += PAGE_SIZE)
> + mmio_invalidate(npu_context, 1, address, false);
>
> - /* Do the flush only on the final addess == end */
> - mmio_invalidate(npu_context, 1, address, true);
> + /* Do the flush only on the final addess == end */
> + mmio_invalidate(npu_context, 1, address, true);
> + }
> }
>
Acked-by: Balbir Singh
> + npu_context->priv != priv) {
>> + spin_unlock(&npu_context_lock);
>> + opal_npu_destroy_context(nphb->opal_id, mm->context.id,
>> + PCI_DEVID(gpdev->bus->n
xt->kref, pnv_npu2_release_context);
> + spin_lock(&npu_context_lock);
> + removed = kref_put(&npu_context->kref, pnv_npu2_release_context);
> + spin_unlock(&npu_context_lock);
> +
> + /*
> +* We need to do this outside of pnv_npu2_release_context so that it
> is
> +* outside the spinlock as mmu_notifier_destroy uses SRCU.
> +*/
> + if (removed) {
> + mmu_notifier_unregister(&npu_context->mn,
> + npu_context->mm);
> +
> + kfree(npu_context);
> + }
> +
Reviewed-by: Balbir Singh
On Wed, Apr 11, 2018 at 8:42 PM, Nicholas Piggin wrote:
> On Wed, 11 Apr 2018 20:04:45 +1000
> Balbir Singh wrote:
>
>> On Wed, Apr 11, 2018 at 7:12 PM, Nicholas Piggin wrote:
>> > For consideration:
>> >
>> > * Add IPv6 support built in + additional mo
On Wed, Apr 11, 2018 at 9:05 PM, Michael Ellerman wrote:
> Balbir Singh writes:
>
>> Don't do this via custom code, instead now that we have support
>> in the arch hotplug/hotunplug code, rely on those routines
>> to do the right thing.
>>
>> Fixes: 9d517
that's just an optimisation the
> defconfig target made. make powernv_defconfig adds those to .config.
>
> This results in a significantly smaller vmlinux:
>
>     text     data      bss       dec      hex filename
> 13121779  5284224  1383776  19789779  12df7d3 vmlinux
> 12126273  4771930  1341464  18239667  11650b3 vmlinux
>
> Signed-off-by: Nicholas Piggin
> ---
Balbir Singh.
> This could be restored if that was able to be fixed, but for now,
> just remove the tracepoints.
Could you share the stack trace as well? I've not observed this in my testing.
Maybe I don't have as many cpus. I presume you're talking about the per-cpu
data offsets for per-cpu trace data?
Balbir Singh.
over the weekend for a pull request on Monday.
>
> If anyone wants to add Acks or Reviews I can append them to the merge
> tag. If there are any NAKs please speak up now, but as far as I know
> there are no pending device-tree design concerns.
Hi, Dan
I can ack Oliver's work; I will do so in each patch.
Overall
Acked-by: Balbir Singh
Balbir Singh
On Fri, Apr 6, 2018 at 11:26 AM, Nicholas Piggin wrote:
> On Thu, 05 Apr 2018 16:40:26 -0400
> Jeff Moyer wrote:
>
>> Nicholas Piggin writes:
>>
>> > On Thu, 5 Apr 2018 15:53:07 +1000
>> > Balbir Singh wrote:
>> >> I'm thinkin
size instead of
ppc64_caches.l1d.line_size
Signed-off-by: Balbir Singh
---
arch/powerpc/platforms/powernv/memtrace.c | 17 -
1 file changed, 17 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/memtrace.c
b/arch/powerpc/platforms/powernv/memtrace.c
index de470caf0784..fc22
es no custom flushing
is needed in the memtrace code.
Signed-off-by: Balbir Singh
---
arch/powerpc/mm/mem.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 85245ef97e72..0a8959b15b39 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/
On Thu, Apr 5, 2018 at 9:26 PM, Oliver wrote:
> On Thu, Apr 5, 2018 at 5:14 PM, Balbir Singh wrote:
>> The pmem infrastructure uses memcpy_mcsafe in the pmem
>> layer so as to convert machine check excpetions into
>> a return value on failure in case a machine check
>&
On Thu, Apr 5, 2018 at 8:56 PM, Anshuman Khandual
wrote:
> There are certain platforms which would like to use a SWIOTLB-based DMA API
> for bouncing purpose without actually requiring an IOMMU back end. But the
> virtio core does not allow such mechanism. Right now DMA MAP API is only
> selected fo
Don't do this via custom code, instead now that we have support
in the arch hotplug/hotunplug code, rely on those routines
to do the right thing.
Signed-off-by: Balbir Singh
---
arch/powerpc/platforms/powernv/memtrace.c | 17 -
1 file changed, 17 deletions(-)
diff --git a
f the memtrace region, we memset
the regions we are about to hot unplug). After these patches no custom
flushing is needed in the memtrace code.
Signed-off-by: Balbir Singh
---
arch/powerpc/mm/mem.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/
does
not print any error message as the error is treated as
returned via a return value and handled.
Signed-off-by: Balbir Singh
---
arch/powerpc/include/asm/mce.h | 3 +-
arch/powerpc/kernel/mce.c | 77 --
2 files changed, 77 insertions(+), 3
, largely
to keep the patch simple. If needed those optimizations
can be folded in.
Signed-off-by: Balbir Singh
Acked-by: Nicholas Piggin
---
arch/powerpc/include/asm/string.h | 2 +
arch/powerpc/lib/Makefile | 2 +-
arch/powerpc/lib/memcpy_mcsafe_64.S | 212
ift values returned.
Fixes: ba41e1e1ccb9 ("powerpc/mce: Hookup derror (load/store) UE errors")
Signed-off-by: Balbir Singh
---
arch/powerpc/kernel/mce_power.c | 26 --
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kernel/mce_power.c b
memcpy_mcsafe
via ioctls.
Changelog v2
- Fix the logic of shifting in addr_to_pfn
- Use shift consistently instead of PAGE_SHIFT
- Fix a typo in patch1
Balbir Singh (3):
powerpc/mce: Bug fixes for MCE handling in kernel space
powerpc/memcpy: Add memcpy_mcsafe for pmem
powerpc/mce: Handle
On Thu, 5 Apr 2018 15:04:05 +1000
Nicholas Piggin wrote:
> On Wed, 4 Apr 2018 20:00:52 -0700
> Dan Williams wrote:
>
> > [ adding Matthew, Christoph, and Tony ]
> >
> > On Wed, Apr 4, 2018 at 4:57 PM, Nicholas Piggin wrote:
> > > On Thu, 5 Apr 20
On Wed, 4 Apr 2018 07:21:32 -0700
Dan Williams wrote:
> On Wed, Apr 4, 2018 at 7:04 AM, Oliver wrote:
> > On Wed, Apr 4, 2018 at 10:07 PM, Balbir Singh
> > wrote:
> >> On Tue, 3 Apr 2018 10:37:51 -0700
> >> Dan Williams wrote:
> >>
> >>
On Thu, 5 Apr 2018 09:49:00 +1000
Nicholas Piggin wrote:
> On Thu, 5 Apr 2018 09:19:41 +1000
> Balbir Singh wrote:
>
> > The code currently assumes PAGE_SHIFT as the shift value of
> > the pfn, this works correctly (mostly) for user space pages,
> > but the corre
does
not print any error message as the error is treated as
returned via a return value and handled.
Signed-off-by: Balbir Singh
---
arch/powerpc/include/asm/mce.h | 3 +-
arch/powerpc/kernel/mce.c | 77 --
2 files changed, 77 insertions(+), 3
, largely
to keep the patch simple. If needed those optimizations
can be folded in.
Signed-off-by: Balbir Singh
---
arch/powerpc/include/asm/string.h | 2 +
arch/powerpc/lib/Makefile | 2 +-
arch/powerpc/lib/memcpy_mcsafe_64.S | 212
3 files
sical address still use PAGE_SHIFT for
computation. handle_ierror() is not modified and handle_derror()
is modified just for extracting the correct instruction
address.
Fixes: ba41e1e1ccb9 ("powerpc/mce: Hookup derror (load/store) UE errors")
Signed-off-by: Balbir Singh
---
arch
memcpy_mcsafe
via ioctls.
Balbir Singh (3):
powerpc/mce: Bug fixes for MCE handling in kernel space
powerpc/memcpy: Add memcpy_mcsafe for pmem
powerpc/mce: Handle memcpy_mcsafe
arch/powerpc/include/asm/mce.h | 3 +-
arch/powerpc/include/asm/string.h | 2 +
arch/powerpc/kernel/mce.c
On Wed, 4 Apr 2018 00:24:15 +1000
Oliver O'Halloran wrote:
> Scan the devicetree for an nvdimm-bus compatible and create
> a platform device for them.
>
> Signed-off-by: Oliver O'Halloran
> ---
Acked-by: Balbir Singh
. Since we
don't have the ACPI abstractions, the nmem region would need to add the
ability for a driver to have a phandle to the interleaving and nmem properties.
I guess that would be a separate driver that would manage the nmem devices,
and there would be a way to relate the pmem and nmems. Oliver?
Balbir Singh.
a pointer to the relevant node in the descriptor.
>
> Signed-off-by: Oliver O'Halloran
> Acked-by: Dan Williams
> ---
Acked-by: Balbir Singh
of number of cpus, and the
callee should figure this out on its own? Maybe not in this series, but
in the longer run.
Balbir Singh.
does
not print any error message as the error is treated as
returned via a return value and handled.
Signed-off-by: Balbir Singh
---
arch/powerpc/include/asm/mce.h | 3 +-
arch/powerpc/kernel/mce.c | 77 --
2 files changed, 77 insertions(+), 3
, largely
to keep the patch simple. If needed those optimizations
can be folded in.
Signed-off-by: Balbir Singh
---
arch/powerpc/include/asm/string.h | 2 +
arch/powerpc/lib/Makefile | 2 +-
arch/powerpc/lib/memcpy_mcsafe_64.S | 212
3 files