On 9/1/20 8:55 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
pte_clear_tests operate on an existing pte entry. Make sure that is not a none
pte entry.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
On 9/1/20 9:11 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
This will help in adding proper locks in a later patch
It really makes sense to classify these tests here as static and dynamic.
Static are the ones that test via page table entry values modification
On 01/09/2020 at 08:25, Aneesh Kumar K.V wrote:
On 9/1/20 8:52 AM, Anshuman Khandual wrote:
There is a checkpatch.pl warning here.
WARNING: Possible unwrapped commit description (prefer a maximum 75
chars per line)
#7:
Architectures like ppc64 use deposited page table while updating the huge pte entries.
The following random segfault is observed from time to time with
map_hugetlb selftest:
root@localhost:~# ./map_hugetlb 1 19
524288 kB hugepages
Mapping 1 Mbytes
Segmentation fault
[ 31.219972] map_hugetlb[365]: segfault (11) at 117 nip 77974f8c lr 779a6834
code 1 in ld-2.23.so[77966000+21000]
The displayed size is in bytes while the text says it is in kB.
Shift it by 10 to really display kBytes.
Fixes: fa7b9a805c79 ("tools/selftest/vm: allow choosing mem size and page size
in map_hugetlb")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Leroy
---
tools/testing/selftests/vm/map
The 8xx has 4 page sizes: 4k, 16k, 512k and 8M
4k and 16k can be selected at build time as standard page sizes,
and 512k and 8M are hugepages.
When 4k standard pages are selected, 16k pages are not available.
Allow 16k pages as hugepages when 4k pages are used.
To allow that, implement arch_mak
On 8xx, the number of entries occupied by a PTE in the page tables
depends on the size of the page. At the time being, this calculation
is done in two places: in pte_update() and in set_huge_pte_at()
Refactor this calculation into a helper called
number_of_cells_per_pte(). For the time being, the
On Mon, Aug 31, 2020 at 11:14:18AM +1000, Nicholas Piggin wrote:
> Excerpts from Michal Suchánek's message of August 31, 2020 6:11 am:
> > Hello,
> >
> > on POWER8 KVM hosts lock up since commit 10d91611f426 ("powerpc/64s:
> > Reimplement book3s idle code in C").
> >
> > The symptom is host locki
Hello,
On 7/8/20 5:24 PM, Christoph Hellwig wrote:
> Use the DMA API bypass mechanism for direct window mappings. This uses
> common code and speed up the direct mapping case by avoiding indirect
> calls just when not using dma ops at all. It also fixes a problem where
> the sync_* methods were
On 8/31/20 8:40 AM, Christoph Hellwig wrote:
> On Sun, Aug 30, 2020 at 11:04:21AM +0200, Cédric Le Goater wrote:
>> Hello,
>>
>> On 7/8/20 5:24 PM, Christoph Hellwig wrote:
>>> Use the DMA API bypass mechanism for direct window mappings. This uses
>>> common code and speed up the direct mapping ca
On 31.08.20 at 03:14, Nicholas Piggin wrote:
> Excerpts from Michal Suchánek's message of August 31, 2020 6:11 am:
>> Hello,
>>
>> on POWER8 KVM hosts lock up since commit 10d91611f426 ("powerpc/64s:
>> Reimplement book3s idle code in C").
>>
>> The symptom is host locking up completely after some
On 31.08.20 at 03:14, Nicholas Piggin wrote:
> So hwthread_state is never getting back to to HWTHREAD_IN_IDLE on
> those threads. I wonder what they are doing. POWER8 doesn't have a good
> NMI IPI and I don't know if it supports pdbg dumping registers from the
> BMC unfortunately.
> Do the mess
Michal Suchánek writes:
> On Mon, Aug 31, 2020 at 11:14:18AM +1000, Nicholas Piggin wrote:
>> Excerpts from Michal Suchánek's message of August 31, 2020 6:11 am:
>> > Hello,
>> >
>> > on POWER8 KVM hosts lock up since commit 10d91611f426 ("powerpc/64s:
>> > Reimplement book3s idle code in C").
>>
On 30.08.2020 23:59, Chris Packham wrote:
>
> On 31/08/20 9:41 am, Heiner Kallweit wrote:
>> On 30.08.2020 23:00, Chris Packham wrote:
>>> On 31/08/20 12:30 am, Nicholas Piggin wrote:
Excerpts from Chris Packham's message of August 28, 2020 8:07 am:
>>>
>>>
> I've also now seen the R
Ruediger Oertel writes:
> On 31.08.20 at 03:14, Nicholas Piggin wrote:
>> Excerpts from Michal Suchánek's message of August 31, 2020 6:11 am:
>>> Hello,
>>>
>>> on POWER8 KVM hosts lock up since commit 10d91611f426 ("powerpc/64s:
>>> Reimplement book3s idle code in C").
>>>
>>> The symptom is hos
On 31.08.20 at 14:58, Michael Ellerman wrote:
[...]
>> The machines are running in OPAL/PowerNV mode, with "ppc64_cpu --smt=off".
>> The number of VMs varies across the machines:
>> obs-power8-01: 18 instances, "-smp 16,threads=8"
>> obs-power8-02: 20 instances, "-smp 8,threads=8"
>> obs-power8-03
https://bugzilla.kernel.org/show_bug.cgi?id=205183
Michael Ellerman (mich...@ellerman.id.au) changed:
What    | Removed  | Added
Status  | RESOLVED | CLOSED
> -Original Message-
> From: Linus Torvalds
> Sent: Saturday, August 29, 2020 4:20 PM
> To: Guenter Roeck
> Cc: Luc Van Oostenryck ; Herbert Xu
> ; Andrew Morton foundation.org>; Joerg Roedel ; Leo Li
> ; Zhang Wei ; Dan Williams
> ; Vinod Koul ; linuxppc-dev
> ; dma ; Linux
> Kernel M
From: Linus Torvalds
[ Upstream commit 0a4c56c80f90797e9b9f8426c6aae4c0cf1c9785 ]
Commit ef91bb196b0d ("kernel.h: Silence sparse warning in
lower_32_bits") caused new warnings to show in the fsldma driver, but
that commit was not to blame: it only exposed some very incorrect code
that tried to t
VIOS partitions with SLI-4 enabled Emulex adapters will be capable of
driving IO in parallel through multiple work queues or channels, and
with new hypervisor firmware that supports multiple interrupt sources
an ibmvfc NPIV single initiator can be modified to exploit end to end
channelization in a
The vdso linker script is preprocessed on demand.
Adding it to 'targets' is enough to include the .cmd file.
Signed-off-by: Masahiro Yamada
---
arch/arm64/kernel/vdso/Makefile | 2 +-
arch/arm64/kernel/vdso32/Makefile | 2 +-
arch/nds32/kernel/vdso/Makefile | 2 +-
arch/powerpc/kernel
This RFC improves the performance of indirect mapping on all tested DMA
usages, based on an mlx5 device, ranging from 64k packets to 1-byte
packets, from 1 thread to 64 threads.
In all workloads tested, the performance of indirect mapping gets very
near to direct mapping case.
The whole thing is
Given an existing mapping with 'current' direction, and a 'wanted'
direction for using that mapping, check if 'wanted' is satisfied by
'current'.
current              accepts
DMA_BIDIRECTIONAL    DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_NONE
DMA_TO_DEVICE
In pseries, mapping a DMA page for a cpu memory range requires a
H_PUT_TCE* hypercall, and unmapping requires a H_STUFF_TCE hypercall.
When doing a lot of I/O, a thread can spend a lot of time doing such
hcalls, especially when a DMA mapping doesn't get reused.
The purpose of this change is to introd
From: YueHaibing
Date: Sat, 29 Aug 2020 19:58:23 +0800
> Add missing MODULE_DESCRIPTION.
>
> Signed-off-by: YueHaibing
Applied.
https://bugzilla.kernel.org/show_bug.cgi?id=209029
--- Comment #2 from Erhard F. (erhar...@mailbox.org) ---
No change with 5.9-rc3.
--
You are receiving this mail because:
You are watching the assignee of the bug.
The boundary_size might be as large as ULONG_MAX, which means
that a device has no specific boundary limit. So "+ 1" would
potentially overflow.
Also, by following other places in the kernel, boundary_size
should align with the PAGE_SIZE before right shifting by the
PAGE_SHIFT. However, passing it
The boundary_size might be as large as ULONG_MAX, which means
that a device has no specific boundary limit. So either "+ 1"
or passing it to ALIGN() would potentially overflow.
According to kernel defines:
#define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
#define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1)
For this resend
The original series has not been acked on any patch, so I am
resending it, as suggested by Niklas.
Coverletter
We are extending the default DMA segmentation boundary to its
possible maximum value (ULONG_MAX) to indicate that a device
doesn't specify a boun
Hi Tyrel,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on mkp-scsi/for-next scsi/for-next v5.9-rc3
next-20200828]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '-
On Thu, Jul 30, 2020 at 10:52:59PM +1000, Herbert Xu wrote:
> There are two __packed attributes in qman.h that are both unnecessary
> and causing compiler warnings because they're conflicting with
> explicit alignment requirements set on members within the structure.
>
> This patch removes them bo
On 1/09/20 12:33 am, Heiner Kallweit wrote:
> On 30.08.2020 23:59, Chris Packham wrote:
>> On 31/08/20 9:41 am, Heiner Kallweit wrote:
>>> On 30.08.2020 23:00, Chris Packham wrote:
On 31/08/20 12:30 am, Nicholas Piggin wrote:
> Excerpts from Chris Packham's message of August 28, 2020 8:07
> -Original Message-
> From: Herbert Xu
> Sent: Thursday, July 30, 2020 7:53 AM
> To: Leo Li ; linuxppc-dev@lists.ozlabs.org; linux-arm-
> ker...@lists.infradead.org
> Subject: [PATCH] soc: fsl: Remove bogus packed attributes from qman.h
>
> There are two __packed attributes in qman.h
On Tue, Sep 01, 2020 at 01:50:38AM +, Leo Li wrote:
>
> Sorry for the late response. I missed this email previously.
>
> These structures are descriptors used by hardware, we cannot have _ANY_
> padding from the compiler. The compiled result might be the same with or
> without the __packed
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> With the hash page table, the kernel should not use pmd_clear for clearing
> huge pte entries. Add a DEBUG_VM WARN to catch the wrong usage.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 14
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> powerpc used to set the pte specific flags in set_pte_at(). This is different
> from other architectures. To be consistent with other architectures, update
> pfn_pte to set _PAGE_PTE on ppc64. Also, drop now unused pte_mkpte.
>
> We add a VM_WARN_
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit
> in a random value.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 13 ++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
>
> di
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> ppc64 supports huge vmap only with radix translation. Hence use arch helper
> to determine the huge vmap support.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 15 +--
> 1 file changed, 13 insertions(+), 2 del
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> Saved write support was added to track the write bit of a pte after marking
> the
> pte protnone. This was done so that AUTONUMA can convert a write pte to
> protnone
> and still track the old write bit. When converting it back we set the pte
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> kernel expects entries to be marked huge before we use
> set_pmd_at()/set_pud_at().
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 21 -
> 1 file changed, 12 insertions(+), 9 deletions(-)
>
> diff --g
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> Architectures like ppc64 use deposited page table while updating the huge pte
> entries.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 10 +++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> pte_clear_tests operate on an existing pte entry. Make sure that is not a none
> pte entry.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 6 --
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/de
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> This will help in adding proper locks in a later patch
It really makes sense to classify these tests here as static and dynamic.
Static are the ones that test via page table entry values modification
without changing anything on the actual page
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> Make sure we call pte accessors with correct lock held.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 34 --
> 1 file changed, 20 insertions(+), 14 deletions(-)
>
> diff --git a/mm/debug_v
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
> This seems to be missing quite a lot of details w.r.t allocating
> the correct pgtable_t page (huge_pte_alloc()), holding the right
> lock (huge_pte_lock()) etc. The vma used is also not a hugetlb VMA.
>
> ppc64 does have runtime checks within CONF
https://bugzilla.kernel.org/show_bug.cgi?id=209029
Christophe Leroy (christophe.le...@csgroup.eu) changed:
What|Removed |Added
CC||christoph
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/vdso/gettimeofday.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/include/asm/vdso/gettimeofday.h
b/arch/powerpc/include/asm/vdso/gettimeofday.h
index 59a609a48b63..8da84722729b 100644
--- a/arch/powerpc/include/as
Excerpts from Chris Packham's message of September 1, 2020 11:25 am:
>
> On 1/09/20 12:33 am, Heiner Kallweit wrote:
>> On 30.08.2020 23:59, Chris Packham wrote:
>>> On 31/08/20 9:41 am, Heiner Kallweit wrote:
On 30.08.2020 23:00, Chris Packham wrote:
> On 31/08/20 12:30 am, Nicholas Pigg
On 9/1/20 8:45 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit
in a random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 13 ++---
1 file changed, 10 insert
On 9/1/20 8:51 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
kernel expects entries to be marked huge before we use
set_pmd_at()/set_pud_at().
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 21 -
1 file changed, 12 insertion
On 9/1/20 8:52 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
Architectures like ppc64 use deposited page table while updating the huge pte
entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 10 +++---
1 file changed, 7 insertions(+), 3
On 9/1/20 9:33 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
This seems to be missing quite a lot of details w.r.t allocating
the correct pgtable_t page (huge_pte_alloc()), holding the right
lock (huge_pte_lock()) etc. The vma used is also not a hugetlb VMA.
ppc6