On Mon, Oct 12, 2015 at 08:46:21AM +0200, Peter Zijlstra wrote:
> On Sun, Oct 11, 2015 at 06:25:20PM +0800, Boqun Feng wrote:
> > On Sat, Oct 10, 2015 at 09:58:05AM +0800, Boqun Feng wrote:
> > > Hi Peter,
> > >
> > > Sorry for replying late.
> > >
> > > On Thu, Oct 01, 2015 at 02:27:16PM +0200,
When configuring the MDIO subsystem it is also necessary to configure
the TBI register. Make sure the TBI is contained within the mapped
register range in order to:
a) make sure the address is computed correctly
b) make users aware that we're actually accessing that register
In case of error, prin
commit afae5ad78b342f401c28b0bb1adb3cd494cb125a
"net/fsl_pq_mdio: streamline probing of MDIO nodes"
added support for different types of MDIO devices:
1) Gianfar MDIO nodes that only map the MII registers
2) Gianfar MDIO nodes that map the full MDIO register set
3) eTSEC2 MDIO nodes (which map t
On Mon, Oct 12, 2015 at 09:17:50AM +0800, Boqun Feng wrote:
> On Thu, Oct 01, 2015 at 11:03:01AM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 01, 2015 at 07:13:04PM +0200, Peter Zijlstra wrote:
> > > On Thu, Oct 01, 2015 at 08:09:09AM -0700, Paul E. McKenney wrote:
> > > > On Thu, Oct 01, 2015 at
On Wed, Sep 16, 2015 at 04:49:29PM +0100, Boqun Feng wrote:
> Some atomic operations now have _{relaxed, acquire, release} variants,
> this patch then adds some trivial tests for two purposes:
>
> 1. test the behavior of these new operations in a single-CPU
> environment.
> 2. make their
On Sat, 2015-10-10 at 00:30 +0600, Alexander Kuleshov wrote:
> The memblock API provides the memblock_is_memory() function, which
> tries to find a given physical address in memblock.memory.regions.
> Let's use this function instead of open-coding the same functionality.
Are you sure it implements exactly
On Mon, Oct 12, 2015 at 10:30:34AM +0100, Will Deacon wrote:
> On Wed, Sep 16, 2015 at 04:49:29PM +0100, Boqun Feng wrote:
> > Some atomic operations now have _{relaxed, acquire, release} variants,
> > this patch then adds some trivial tests for two purposes:
> >
> > 1. test the behavior of these
On Thu, 2015-10-08 at 23:30 +0200, Arnd Bergmann wrote:
> On Friday 09 October 2015 08:09:12 Michael Ellerman wrote:
> > Currently the NR_IRQS option sits at the top level, which is ugly in
> > menuconfig. It's not something users will commonly need to worry about
> > so move it into "Kernel Option
On Thu, 2015-10-08 at 12:53 -0500, Rob Herring wrote:
> Enable building all dtb files when CONFIG_OF_ALL_DTBS is enabled. The dtbs
> are not really dependent on a platform being enabled or any other kernel
> config, so for testing coverage it is convenient to build all of the dtbs.
> This builds al
On Monday 12 October 2015 21:00:25 Michael Ellerman wrote:
> On Thu, 2015-10-08 at 23:30 +0200, Arnd Bergmann wrote:
> > On Friday 09 October 2015 08:09:12 Michael Ellerman wrote:
> > > Currently the NR_IRQS option sits at the top level, which is ugly in
> > > menuconfig. It's not something users w
On Mon, 2015-10-12 at 12:30 +0200, Arnd Bergmann wrote:
> On Monday 12 October 2015 21:00:25 Michael Ellerman wrote:
> > On Thu, 2015-10-08 at 23:30 +0200, Arnd Bergmann wrote:
> > > On Friday 09 October 2015 08:09:12 Michael Ellerman wrote:
> > > > Currently the NR_IRQS option sits at the top leve
On Fri, 2015-17-07 at 07:19:59 UTC, Christophe Jaillet wrote:
> If 'nvram_write_header' fails, then 'new_part' should be freed, otherwise,
> there is a memory leak.
>
> Signed-off-by: Christophe JAILLET
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/7d523187173294f6ae3b86a4
On Thu, 2015-01-10 at 09:46:06 UTC, Andy Shevchenko wrote:
> Extract a new module to share the code between other modules.
>
> There is no functional change.
>
> Signed-off-by: Andy Shevchenko
Series applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/948ad1acaf456b7213731cd9
On Fri, 2015-17-07 at 07:20:00 UTC, Christophe Jaillet wrote:
> 'nvram_create_os_partition' should be 'nvram_create_partition'.
> Use __func__ to have it right, as done elsewhere in this file.
>
> Signed-off-by: Christophe JAILLET
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/
On Fri, 2015-12-06 at 06:57:11 UTC, Denis Kirjanov wrote:
> Fix the memory leak in create_gatt_table:
> we've lost a kfree on the exit path for the pages array allocated
> in uninorth_create_gatt_table
>
> Signed-off-by: Denis Kirjanov
Applied to powerpc next, thanks.
https://git.kernel.org/pow
On Fri, 2015-09-10 at 03:02:21 UTC, "Aneesh Kumar K.V" wrote:
> We need to properly identify whether a hugepage is an explicit or
> a transparent hugepage in follow_huge_addr(). We used to depend
> on the hugepage shift argument to do that. But in some cases that can
> give wrong results. For ex:
>
On Mon, 2015-07-09 at 07:23:53 UTC, "Aneesh Kumar K.V" wrote:
> After commit e2b3d202d1dba8f3546ed28224ce485bc50010be
> ("powerpc: Switch 16GB and 16MB explicit hugepages to a
> different page table format"), we don't need to support
> is_hugepd() for 64K page size.
>
> Signed-off-by: Aneesh Kumar
On Thu, 2015-08-10 at 19:00:58 UTC, Colin King wrote:
> From: Colin Ian King
>
> pi_buff is being memset before it is sanity checked. Move the
> memset after the null pi_buff sanity check to avoid an oops.
>
> Signed-off-by: Colin Ian King
Applied to powerpc next, thanks.
https://git.kernel.o
On Thu, 2015-08-10 at 07:59:28 UTC, "Aneesh Kumar K.V" wrote:
> This avoids errors like
>
> unsigned int usize = 1 << 30;
> int size = 1 << 30;
> unsigned long addr = 64UL << 30 ;
>
> value = _ALIGN_DOWN(addr, usize); -> 0
> value = _ALIGN_DOWN(addr, size);
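The effect described above can be reproduced in a small standalone program (a sketch using a simplified copy of the _ALIGN_DOWN() macro, not the kernel header itself):

#include <stdio.h>

/* Simplified copy of the kernel's _ALIGN_DOWN() macro. */
#define _ALIGN_DOWN(addr, size)	((addr) & ~((size) - 1))

int main(void)
{
	unsigned int usize = 1U << 30;
	int size = 1 << 30;
	unsigned long addr = 64UL << 30;

	/*
	 * With an unsigned int size, ~((size) - 1) is computed in 32 bits
	 * and then zero-extended, so the mask loses its upper bits and the
	 * result collapses to 0.
	 */
	printf("unsigned size: %#lx\n", _ALIGN_DOWN(addr, usize));

	/*
	 * With a signed int size, ~((size) - 1) is negative and sign-extends
	 * to a full 64-bit mask, so the expected value is preserved.
	 */
	printf("signed size:   %#lx\n", _ALIGN_DOWN(addr, size));

	return 0;
}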
On Wed, 2015-16-09 at 19:26:14 UTC, Denis Kirjanov wrote:
> During the MSI bitmap test on boot kmemleak spews the following trace:
>
> unreferenced object 0xc0016e86c900 (size 64):
> comm "swapper/0", pid 1, jiffies 4294893173 (age 518.024s)
> hex dump (first 32 bytes):
> 00 00 0
On Fri, 2015-21-08 at 11:05:15 UTC, Christophe Leroy wrote:
> show_interrupts() expects the irq_chip name to be at most 8 characters,
> otherwise everything gets misaligned
>
> # cat /proc/interrupts
>CPU0
> 17: 0 CPM PIC 0 Level error
> 19: 0 MPC8XX SIU 15 Leve
On Wed, 2015-22-07 at 05:54:29 UTC, Samuel Mendoza-Jonas wrote:
> Always include a timeout when waiting for secondary cpus to enter OPAL
> in the kexec path, rather than only when crashing.
>
> Signed-off-by: Samuel Mendoza-Jonas
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c
On Monday 12 October 2015 22:07:45 Michael Ellerman wrote:
> Yeah, this builds and boots at least on pseries KVM.
>
> diff --git a/arch/powerpc/include/asm/irq.h b/arch/powerpc/include/asm/irq.h
> index e8e3a0a04eb0..35fba282b7f9 100644
> --- a/arch/powerpc/include/asm/irq.h
> +++ b/arch/powerpc/i
Hi,
This is v3 of the series.
Link for v1: https://lkml.org/lkml/2015/8/27/798
Link for v2: https://lkml.org/lkml/2015/9/16/527
Paul, Peter and Will, thank you all for the comments and suggestions,
that's really a lot of fun to discuss these with you and very
enlightening to me ;-)
Changes sinc
According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_
versions all need to imply a full barrier; however, they are currently only
RELEASE+ACQUIRE, which is not a full barrier.
So replace PPC_RELEASE_BARRIER and PPC_ACQUIRE_BARRIER with
PPC_ATOMIC_ENTRY_BARRIER and PPC_ATOMIC_EXIT_BARRIER
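For context, the fully ordered ll/sc pattern these barriers produce looks roughly like the sketch below (simplified from the shape of arch/powerpc/include/asm/cmpxchg.h; an illustration of where the entry/exit barriers sit, not the exact kernel source):

/*
 * Sketch of a fully ordered 32-bit exchange on powerpc: a heavyweight
 * barrier before the ll/sc loop and another one after it is what makes
 * the whole operation a full barrier, unlike the lwsync/isync pair used
 * for RELEASE+ACQUIRE.
 */
static __always_inline unsigned long
__xchg_u32(volatile void *p, unsigned long val)
{
	unsigned long prev;

	__asm__ __volatile__(
	PPC_ATOMIC_ENTRY_BARRIER		/* order earlier accesses */
"1:	lwarx	%0,0,%2\n"			/* load-reserve old value */
"	stwcx.	%3,0,%2\n"			/* store-conditional new value */
"	bne-	1b\n"				/* retry if reservation was lost */
	PPC_ATOMIC_EXIT_BARRIER			/* order later accesses */
	: "=&r" (prev), "+m" (*(volatile unsigned int *)p)
	: "r" (p), "r" (val)
	: "cc", "memory");

	return prev;
}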
Some atomic operations now have _{relaxed, acquire, release} variants,
this patch then adds some trivial tests for two purposes:
1. test the behavior of these new operations in a single-CPU
environment.
2. make sure their code is generated before we actually use them somewhere,
so t
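A trivial single-CPU check of this kind could look something like the following sketch (a hypothetical test in the style of lib/atomic64_test.c; the names and values are illustrative, not the actual patch):

/*
 * Illustrative only: run each new variant once on a single CPU and check
 * both the returned value and the resulting memory contents.
 */
static __init void test_atomic_xchg_variants(void)
{
	atomic_t v = ATOMIC_INIT(10);

	BUG_ON(atomic_xchg_relaxed(&v, 20) != 10);
	BUG_ON(atomic_read(&v) != 20);

	BUG_ON(atomic_xchg_acquire(&v, 30) != 20);
	BUG_ON(atomic_xchg_release(&v, 40) != 30);
	BUG_ON(atomic_read(&v) != 40);
}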
Some architectures have their own special barriers for acquire, release
and fence semantics, so the general memory barriers (smp_mb__*_atomic())
in the default __atomic_op_*() may be too strong. Allow architectures
to define their own helpers which can override the default ones.
Signed-off-
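The default helpers being referred to are built from the _relaxed operation plus the generic barriers, roughly as in this sketch of the include/linux/atomic.h pattern (simplified):

/*
 * Default construction (simplified): the acquire and release forms are
 * derived from the _relaxed operation by adding the generic
 * smp_mb__{before,after}_atomic() barriers.  The #ifndef guards are what
 * let an architecture provide its own, cheaper definitions.
 */
#ifndef __atomic_op_acquire
#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	smp_mb__after_atomic();						\
	__ret;								\
})
#endif

#ifndef __atomic_op_release
#define __atomic_op_release(op, args...)				\
({									\
	smp_mb__before_atomic();					\
	op##_relaxed(args);						\
})
#endif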
On powerpc, acquire and release semantics can be achieved with
lightweight barriers ("lwsync" and "ctrl+isync"), which can be used to
implement __atomic_op_{acquire,release}.
For release semantics, since we only need to ensure that all memory accesses
issued beforehand take effect before the -stor
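Concretely, the powerpc overrides take roughly the following shape (a sketch based on the description above; PPC_RELEASE_BARRIER is lwsync, and PPC_ACQUIRE_BARRIER is isync, which pairs with the conditional branch ending the ll/sc loop to form ctrl+isync):

/*
 * Sketch of the powerpc overrides: lwsync goes before the relaxed
 * operation for release semantics, and isync goes after it for acquire
 * semantics.
 */
#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");	\
	__ret;								\
})

#define __atomic_op_release(op, args...)				\
({									\
	__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");	\
	op##_relaxed(args);						\
})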
Implement xchg_relaxed and atomic{,64}_xchg_relaxed, based on these
_relaxed variants, release/acquire variants and fully ordered versions
can be built.
Note that xchg_relaxed and atomic{,64}_xchg_relaxed are not compiler
barriers.
Signed-off-by: Boqun Feng
---
arch/powerpc/include/asm/atomic.
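The relaxed primitive itself is just the bare ll/sc loop, with no surrounding barrier and no "memory" clobber, along the lines of this simplified sketch:

/*
 * Sketch: no entry/exit barrier and no "memory" clobber, so this is
 * neither a memory barrier nor a compiler barrier.
 */
static __always_inline unsigned long
__xchg_u32_relaxed(u32 *p, unsigned long val)
{
	unsigned long prev;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%2\n"
"	stwcx.	%3,0,%2\n"
"	bne-	1b\n"
	: "=&r" (prev), "+m" (*p)
	: "r" (p), "r" (val)
	: "cc");

	return prev;
}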
Implement cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed, based on
which _release variants can be built.
To avoid superfluous barriers in the _acquire variants, we implement these
operations in assembly code rather than using __atomic_op_acquire() to build
them automatically.
For the same reason, we
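The point about avoiding a superfluous barrier shows up in the acquire-flavoured compare-and-swap: written directly in assembly, the acquire barrier sits only on the success path instead of being emitted unconditionally after the operation. A simplified sketch:

static __always_inline unsigned long
__cmpxchg_u32_acquire(u32 *p, unsigned long old, unsigned long new)
{
	unsigned long prev;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%2\n"		/* load-reserve current value */
"	cmpw	0,%0,%3\n"		/* compare with expected value */
"	bne-	2f\n"			/* mismatch: skip store and barrier */
"	stwcx.	%4,0,%2\n"		/* try to store the new value */
"	bne-	1b\n"			/* reservation lost: retry */
	PPC_ACQUIRE_BARRIER		/* acquire ordering, success path only */
"\n"
"2:"
	: "=&r" (prev), "+m" (*p)
	: "r" (p), "r" (old), "r" (new)
	: "cc", "memory");

	return prev;
}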
Oops.. sorry. I will resend this one with correct address list.
On Mon, Oct 12, 2015 at 10:14:01PM +0800, Boqun Feng wrote:
> According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_
> versions all need to imply a full barrier, however they are now just
> RELEASE+ACQUIRE, which is no
According to memory-barriers.txt, xchg, cmpxchg and their atomic{,64}_
versions all need to imply a full barrier; however, they are currently only
RELEASE+ACQUIRE, which is not a full barrier.
So replace PPC_RELEASE_BARRIER and PPC_ACQUIRE_BARRIER with
PPC_ATOMIC_ENTRY_BARRIER and PPC_ATOMIC_EXIT_BARRIER
On 06.10.2015 [14:19:43 +1100], David Gibson wrote:
> On Fri, Oct 02, 2015 at 10:18:00AM -0700, Nishanth Aravamudan wrote:
> > We will leverage this macro in the NVMe driver, which needs to know the
> > configured IOMMU page shift to properly configure its device's page
> > size.
> >
> > Signed-of
On 06.10.2015 [02:51:36 -0700], Christoph Hellwig wrote:
> Do we need a function here or can we just have an IOMMU_PAGE_SHIFT define
> with an #ifndef in common code?
I suppose we could do that -- I wasn't sure if the macro would be
palatable.
> Also not all architectures use dma-mapping-common.h
On 08/10/2015 21:12, Scott Wood wrote:
On Wed, 2015-10-07 at 14:49 +0200, Christophe Leroy wrote:
On 29/09/2015 02:29, Scott Wood wrote:
On Tue, Sep 22, 2015 at 06:51:13PM +0200, Christophe Leroy wrote:
flush/clean/invalidate _dcache_range() functions are all very
similar and are quite
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/{pte-hash32.h => book3s/32/hash.h} | 0
arch/powerpc/include/asm/{pte-hash64.h => book3s/64/hash.h} | 0
arch/powerpc/include/asm/pgtable-ppc32.h| 2 +-
arch/powerpc/include/asm/pgtable-ppc64.h| 2
Splitting this so that rename detection can track changes to the file. Before
merging we will fold this
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/32/hash.h | 6 +++---
.../include/asm/{pte-hash64-4k.h => book3s/64/hash-4k.h} | 1 -
.../include/asm/{pte-ha
Hi All,
This patch series attempts to update the book3s 64 Linux page table format to
make it more flexible. Our current pte format is very restrictive and we
overload multiple pte bits. This is due to the non-availability of free bits
in pte_t. We use pte_t to track the validity of 4K subpages. This p
Keep it separate to make rebasing easier
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 4 ++--
arch/powerpc/include/asm/book3s/64/pgtable.h | 6 +++---
arch/powerpc/include/asm/pgtable-ppc32.h | 2 --
arch/powerpc/include/asm/pgtable-ppc64.h | 4
In this patch we do:
cp pgtable-ppc32.h book3s/32/pgtable.h
cp pgtable-ppc64.h book3s/64/pgtable.h
This enables us to make further changes to the hash-specific config.
We will change the page table format for 64bit hash in later patches.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/boo
We also convert a few #defines to static inlines in this patch for better
type checking
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/pgtable.h | 112 --
arch/powerpc/include/asm/page.h | 10 ++-
arch/powerpc/include/asm/pgtable-book3e.h |
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/{pgtable-ppc32.h => nohash/32/pgtable.h} | 0
arch/powerpc/include/asm/{pgtable-ppc64.h => nohash/64/pgtable.h} | 2 +-
arch/powerpc/include/asm/nohash/pgtable.h | 8
3 files changed, 5 insertions(+), 5
Functions which operate on pte bits are moved to hash*.h, and other
generic functions are moved to pgtable.h
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 177
arch/powerpc/include/asm/book3s/64/hash.h| 144 +++
arc
This further makes a copy of the pte defines in book3s/64/hash*.h. This
removes the dependency on ppc64-4k.h and ppc64-64k.h
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 87 ++-
arch/powerpc/include/asm/book3s/64/hash-64k.h | 46 ++
We also move __ASSEMBLY__ towards the end of the header. This avoids
having #ifndef __ASSEMBLY__ all over the header
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 93 +++-
arch/powerpc/include/asm/book3s/64/pgtable.h | 88 -
This enables us to keep hash64 related bits together, and makes it easy
to follow.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash.h| 452 ++-
arch/powerpc/include/asm/book3s/64/pgtable.h | 449 +-
arch/powerpc/inclu
We convert them to static inline functions here, as we did with pte_val in
the previous patch
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 6 -
arch/powerpc/include/asm/book3s/64/hash-4k.h | 6 -
arch/powerpc/include/asm/book3s/64/pgtable.h | 36 +++
We are going to drop pte-common.h in a later patch. The idea is that the
hash code should not be required to define all PTE bits. Having the PTE
bits defined in pte-common.h made the code unnecessarily complex.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/pgtable.h | 176 +
We copy only the needed PTE bit defines from pte-common.h to the respective
hash-related headers. This should greatly simplify later patches in which
we are going to change the pte format for the hash config.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 1 +
arch/powerpc/
Move the booke-related headers below booke/32 or booke/64.
We are splitting this change into multiple patches to make the rebasing
easier. The following patches can be folded into this if needed.
They are kept separate for easier review.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/nohash/32/pgtable.h | 16
arch/powerpc/include/asm/{ => nohash/32}/pte-40x.h | 0
arch/powerpc/include/asm/{ => nohash/32}/pte-44x.h | 0
arch/powerpc/include/asm/{ => nohash/32}/pte-8xx.h
Signed-off-by: Aneesh Kumar K.V
---
.../include/asm/{pgtable-ppc64-4k.h => nohash/64/pgtable-4k.h} | 0
.../asm/{pgtable-ppc64-64k.h => nohash/64/pgtable-64k.h} | 0
arch/powerpc/include/asm/nohash/64/pgtable.h | 10 +-
3 files changed, 5 insertions(+), 5 deletio
Currently we use 4 bits for each slot and pack the information for all 16
slots related to a 64K Linux page into a 64-bit value. To do this we use
16 bits of pte_t. Move the hash slot valid bits out of pte_t and place
them in the second half of the pte page. We also now use 8 bits per slot.
Signed-off-by: Anee
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/nohash/32/pte-40x.h | 6 +++---
arch/powerpc/include/asm/nohash/32/pte-44x.h | 6 +++---
arch/powerpc/include/asm/nohash/32/pte-8xx.h | 6 +++---
arch/powerpc/include/asm/nohash/32/pte-fsl-booke.h | 6 +++---
arch/powe
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/Makefile| 3 +
arch/powerpc/mm/hash64_64k.c| 202 +
arch/powerpc/mm/hash_low_64.S | 380
arch/powerpc/mm/hash_utils_64.c | 4 +-
4 files changed, 208 insertions(+), 3
We will use this in a later patch to compute the right hash index
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-64k.h | 2 +-
arch/powerpc/include/asm/book3s/64/pgtable.h | 4 ++--
arch/powerpc/include/asm/nohash/64/pgtable.h | 4 ++--
arch/powerpc/mm/hash64_64k
W.r.t. hugetlb, we support two formats for pmd. With book3s_64 and a
64K Linux page size, we can have ptes at the pmd level, hence we don't
need to support hugepd there. For everything else hugepd is supported
and pmd_huge() is 0.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64
No real change, only style changes
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash.h | 26 +-
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h
b/arch/powerpc/include/asm/book3s/64/hash.h
Convert from asm to C
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-64k.h | 3 +-
arch/powerpc/include/asm/book3s/64/hash.h | 1 +
arch/powerpc/mm/hash64_64k.c | 134 +++-
arch/powerpc/mm/hash_low_64.S | 290 +
This frees up 11 bits in pte_t. In a later patch we also change
the pte_t format so that we can start supporting migration ptes
at the pmd level.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 10 +
arch/powerpc/include/asm/book3s/64/hash-64k.h | 29 ++---
For a pte entry we will have _PAGE_PTE set. Our pte page
addresses have a minimum alignment requirement of HUGEPD_SHIFT_MASK + 1.
We use the lower 7 bits to indicate a hugepd, i.e.
for pmd and pgd we can find:
1) _PAGE_PTE set -> indicates a pte
2) bits [2..6] non-zero -> indicate a hugepd (see the sketch below).
They also
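The rule above can be restated as a small illustration (hypothetical helpers with made-up names and bit values, purely to express the check; these are not the kernel's actual macros):

#include <stdbool.h>
#include <stdint.h>

/* Names and bit positions below are made up for illustration only. */
#define EXAMPLE_PAGE_PTE	(1UL << 62)	/* stand-in for _PAGE_PTE */
#define EXAMPLE_HUGEPD_MASK	(0x1fUL << 2)	/* bits [2..6] */

static inline bool entry_is_pte(uint64_t val)
{
	return (val & EXAMPLE_PAGE_PTE) != 0;
}

static inline bool entry_is_hugepd(uint64_t val)
{
	/* Only meaningful when the PTE bit is clear. */
	return !(val & EXAMPLE_PAGE_PTE) && (val & EXAMPLE_HUGEPD_MASK) != 0;
}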
We will use the increased size to store more information about the 4K ptes
when using a 64K page size. The idea is to free up bits in pte_t.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/pgalloc-64.h | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/power
This is similar to the 64K insert. Maybe we want to consolidate
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/Makefile| 6 +-
arch/powerpc/mm/hash64_4k.c | 139 +
arch/powerpc/mm/hash_low_64.S | 331
arch/powerpc/mm/hash
We should not depend on pte bit positions in asm code. Fix this simply
by moving part of that code to C.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kernel/exceptions-64s.S | 16 +++-
arch/powerpc/mm/hash_utils_64.c | 29 +
2 files changed, 32 insertions(+), 13 del
The only difference here is that we apply the WIMG mapping early, so the
rflags passed to updatepp will also be changed.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/hash64_4k.c | 5 -
arch/powerpc/mm/hash64_64k.c | 10 --
arch/powerpc/mm/hash_utils_64.c | 13 ++
We support THP only with book3s_64 and a 64K page size. Move
the THP details to hash-64k.h to make that clear.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-64k.h | 126 +
arch/powerpc/include/asm/book3s/64/hash.h | 223 +--
arch/pow
Instead of open-coding it in multiple code paths, export the helper
and add more documentation. Also make sure we don't make assumptions
regarding pte bit positions.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash.h | 1 +
arch/powerpc/mm/hash64_4k.c | 13
On 06.10.2015 [02:51:36 -0700], Christoph Hellwig wrote:
> Do we need a function here or can we just have an IOMMU_PAGE_SHIFT define
> with an #ifndef in common code?
On Power, since it's technically variable, we'd need a function. So are
you suggesting define'ing it to a function just on Power and
On 12.10.2015 [09:03:52 -0700], Nishanth Aravamudan wrote:
> On 06.10.2015 [14:19:43 +1100], David Gibson wrote:
> > On Fri, Oct 02, 2015 at 10:18:00AM -0700, Nishanth Aravamudan wrote:
> > > We will leverage this macro in the NVMe driver, which needs to know the
> > > configured IOMMU page shift t
On Mon, Oct 12, 2015 at 5:53 AM, Michael Ellerman wrote:
> On Sun, 2015-10-11 at 15:13 +0300, Ran Shalit wrote:
>> Hello,
>>
>> Is it possible to register an interrupt (in Linux) without using the
>> automatic clear of the interrupt?
>> I need this just for testing.
>
> Hi Ran,
>
> You need to give u
On Fri, 2015-10-09 at 08:09 +1100, Michael Ellerman wrote:
> In general platforms are a more important configuration decision than
> cpus, so the platforms should come first.
>
> My basis for saying that is that our cpu selection options are generally
> just about tuning for a cpu, rather than ena
Gavin Shan writes:
Hi Gavin,
> Currently, we rely on the existence of struct pci_driver::err_handler
> to judge if the corresponding PCI device should be unplugged during
> EEH recovery (partial hotplug case). However, it's not elaborate.
> Some device drivers are implementing part of the EEH
On Mon, Oct 12, 2015 at 09:17:50AM +0800, Boqun Feng wrote:
> Hi Paul,
>
> On Thu, Oct 01, 2015 at 11:03:01AM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 01, 2015 at 07:13:04PM +0200, Peter Zijlstra wrote:
> > > On Thu, Oct 01, 2015 at 08:09:09AM -0700, Paul E. McKenney wrote:
> > > > On Thu, O
On Tue, Oct 13, 2015 at 09:55:53AM +1100, Daniel Axtens wrote:
>> Currently, we rely on the existence of struct pci_driver::err_handler
>> to judge if the corresponding PCI device should be unplugged during
>> EEH recovery (partial hotplug case). However, it's not elaborate.
>> Some device driver
On Fri, Oct 09, 2015 at 07:33:28PM +0100, Will Deacon wrote:
> On Fri, Oct 09, 2015 at 10:43:27AM -0700, Paul E. McKenney wrote:
> > On Fri, Oct 09, 2015 at 10:51:29AM +0100, Will Deacon wrote:
> > > How do people feel about including these in memory-barriers.txt? I find
> > > them considerably eas
On Fri, Oct 09, 2015 at 10:46:53AM +0800, Wei Yang wrote:
>In current implementation, when VF BAR is bigger than 64MB, it uses 4 M64
>BARs in Single PE mode to cover the number of VFs required to be enabled.
>By doing so, several VFs would be in one VF Group and leads to interference
>between VFs i
On Fri, Oct 09, 2015 at 10:46:51AM +0800, Wei Yang wrote:
>On PHB_IODA2, we enable SRIOV devices by mapping IOV BAR with M64 BARs. If
>a SRIOV device's IOV BAR is not 64bit-prefetchable, this is not assigned
>from 64bit prefetchable window, which means M64 BAR can't work on it.
>
>The reason is PCI
On Mon, 2015-10-12 at 16:47 -0500, Scott Wood wrote:
> On Fri, 2015-10-09 at 08:09 +1100, Michael Ellerman wrote:
> > In general platforms are a more important configuration decision than
> > cpus, so the platforms should come first.
> >
> > My basis for saying that is that our cpu selection optio
On Fri, Oct 09, 2015 at 10:46:52AM +0800, Wei Yang wrote:
>The alignment of IOV BAR on PowerNV platform is the total size of the IOV
>BAR. No matter whether the IOV BAR is extended with number of
>roundup_pow_of_two(total_vfs) or number of max PE number (256), the total
>size could be calculated by
On Tue, 2015-10-13 at 00:09 +0530, Aneesh Kumar K.V wrote:
> Hi All,
>
> This patch series attempts to update the book3s 64 Linux page table format to
> make it more flexible. Our current pte format is very restrictive and we
> overload multiple pte bits. This is due to the non-availability of free bit
On Mon, 2015-10-12 at 13:50 +0200, Arnd Bergmann wrote:
> On Monday 12 October 2015 22:07:45 Michael Ellerman wrote:
> > Yeah, this builds and boots at least on pseries KVM.
> >
> > diff --git a/arch/powerpc/include/asm/irq.h b/arch/powerpc/include/asm/irq.h
> > index e8e3a0a04eb0..35fba282b7f9 10
Gavin Shan writes:
> + *
> + * When the PHB is fenced, we have to issue a reset to recover from
> + * the error. Override the result if necessary to have partially
> + * hotplug for this case.
>*/
> pr_info("EEH: Notify device drivers to shutdown\n");
> ee
onto today's next-20151012.
arch/powerpc/include/asm/systbl.h | 12
arch/powerpc/include/asm/unistd.h | 2 +-
arch/powerpc/include/uapi/asm/unistd.h | 12
3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/systbl.h
b
Commit 7a5692e6e533 ("arch/powerpc: provide zero_bytemask() for
big-endian") added a call to __fls() in our word-at-a-time.h. That was
fine for the kernel build but missed the fact that we also use
word-at-a-time.h in a userspace test.
Pulling in the kernel version of __fls() gets messy, so just d
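One way to give the userspace test an __fls() without dragging in the kernel headers is a tiny local helper built on a GCC builtin (a sketch of the general approach, not necessarily the fix that was applied):

/*
 * Userspace stand-in for the kernel's __fls(): index of the most
 * significant set bit of a non-zero long.
 */
static inline unsigned long __fls(unsigned long word)
{
	return (sizeof(word) * 8) - 1 - __builtin_clzl(word);
}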
On Tue, Oct 13, 2015 at 11:01:24AM +1100, Gavin Shan wrote:
>On Fri, Oct 09, 2015 at 10:46:51AM +0800, Wei Yang wrote:
>>On PHB_IODA2, we enable SRIOV devices by mapping IOV BAR with M64 BARs. If
>>a SRIOV device's IOV BAR is not 64bit-prefetchable, this is not assigned
>>from 64bit prefetchable wi
On Mon, Oct 12, 2015 at 12:25:24PM +0530, Benjamin Herrenschmidt wrote:
>On Mon, 2015-10-12 at 10:58 +0800, Wei Yang wrote:
>> On Fri, Oct 09, 2015 at 07:15:19PM +1100, Benjamin Herrenschmidt wrote:
>> > On Fri, 2015-10-09 at 10:46 +0800, Wei Yang wrote:
>> > > On PHB_IODA2, we enable SRIOV devices
On Tue, Oct 13, 2015 at 11:13:50AM +1100, Gavin Shan wrote:
>On Fri, Oct 09, 2015 at 10:46:52AM +0800, Wei Yang wrote:
>>The alignment of IOV BAR on PowerNV platform is the total size of the IOV
>>BAR. No matter whether the IOV BAR is extended with number of
>>roundup_pow_of_two(total_vfs) or numbe
Gavin Shan writes:
> Daniel, the issue is tracked by IBM's bugzilla 127612 reported from Nvidia
> private GPU drivers. I tried to find the source code from upstream kernel,
> but failed.
OK. So I've read the internal bug, and I'm going to do my best to summarise
without including confidential in
On Tue, Oct 13, 2015 at 10:55:27AM +1100, Gavin Shan wrote:
>On Fri, Oct 09, 2015 at 10:46:53AM +0800, Wei Yang wrote:
>>In current implementation, when VF BAR is bigger than 64MB, it uses 4 M64
>>BARs in Single PE mode to cover the number of VFs required to be enabled.
>>By doing so, several VFs w
On Tue, Oct 13, 2015 at 09:49:30AM +0800, Wei Yang wrote:
>On Tue, Oct 13, 2015 at 11:01:24AM +1100, Gavin Shan wrote:
>>On Fri, Oct 09, 2015 at 10:46:51AM +0800, Wei Yang wrote:
>>>On PHB_IODA2, we enable SRIOV devices by mapping IOV BAR with M64 BARs. If
>>>a SRIOV device's IOV BAR is not 64bit-p
On Tue, Oct 13, 2015 at 10:45:45AM +0800, Wei Yang wrote:
>On Tue, Oct 13, 2015 at 11:13:50AM +1100, Gavin Shan wrote:
>>On Fri, Oct 09, 2015 at 10:46:52AM +0800, Wei Yang wrote:
>>>The alignment of IOV BAR on PowerNV platform is the total size of the IOV
>>>BAR. No matter whether the IOV BAR is ex
On Tue, Oct 13, 2015 at 10:50:42AM +0800, Wei Yang wrote:
>On Tue, Oct 13, 2015 at 10:55:27AM +1100, Gavin Shan wrote:
>>On Fri, Oct 09, 2015 at 10:46:53AM +0800, Wei Yang wrote:
>>>In current implementation, when VF BAR is bigger than 64MB, it uses 4 M64
>>>BARs in Single PE mode to cover the numb
On Wed, 2015-23-09 at 06:41:48 UTC, Daniel Axtens wrote:
> All unrecovered machine check errors on PowerNV should cause an
> immediate panic. There are 2 reasons that this is the right policy:
> it's not safe to continue, and we're already trying to reboot.
...
> Explicitly panic() on unrecovered M
On Thu, 2015-08-10 at 00:04:26 UTC, Cyril Bur wrote:
> native_hpte_clear() is called in real mode from two places:
> - Early in boot during htab initialisation if firmware assisted dump is
> active.
> - Late in the kexec path.
>
> In both contexts there is no need to disable interrupts as they
On Tue, Oct 13, 2015 at 02:20:30PM +1100, Gavin Shan wrote:
>On Tue, Oct 13, 2015 at 09:49:30AM +0800, Wei Yang wrote:
>>On Tue, Oct 13, 2015 at 11:01:24AM +1100, Gavin Shan wrote:
>>>On Fri, Oct 09, 2015 at 10:46:51AM +0800, Wei Yang wrote:
On PHB_IODA2, we enable SRIOV devices by mapping IOV
On Tue, Oct 13, 2015 at 02:27:52PM +1100, Gavin Shan wrote:
>On Tue, Oct 13, 2015 at 10:45:45AM +0800, Wei Yang wrote:
>>On Tue, Oct 13, 2015 at 11:13:50AM +1100, Gavin Shan wrote:
>>>On Fri, Oct 09, 2015 at 10:46:52AM +0800, Wei Yang wrote:
The alignment of IOV BAR on PowerNV platform is the t
When adding a vPHB in cxl_pci_vphb_add(), we allocate a pci_controller
struct using pcibios_alloc_controller(). However, we don't free it in
cxl_pci_vphb_remove(), causing a leak.
Call pcibios_free_controller() in cxl_pci_vphb_remove() to free the vPHB
data structure correctly.
Signed-off-by: Dan
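A minimal sketch of the resulting remove path (illustrative; field names and the surrounding cleanup are assumptions, not the exact cxl code):

/* Illustrative only: pair the pcibios_alloc_controller() done at add time
 * with pcibios_free_controller() on the remove path. */
void cxl_pci_vphb_remove(struct cxl_afu *afu)
{
	struct pci_controller *phb;

	if (!afu || !afu->phb)		/* field name assumed */
		return;

	phb = afu->phb;
	afu->phb = NULL;

	pci_remove_root_bus(phb->bus);
	pcibios_free_controller(phb);	/* the previously missing free */
}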
On Tue, Oct 13, 2015 at 01:48:54PM +1100, Daniel Axtens wrote:
>Gavin Shan writes:
>
>> Daniel, the issue is tracked by IBM's bugzilla 127612 reported from Nvidia
>> private GPU drivers. I tried to find the source code from upstream kernel,
>> but failed.
>
>OK. So I've read the internal bug, and
On Tue, Oct 13, 2015 at 12:43:23PM +1100, Daniel Axtens wrote:
>Gavin Shan writes:
>
>> + *
>> + * When the PHB is fenced, we have to issue a reset to recover from
>> + * the error. Override the result if necessary to have partially
>> + * hotplug for this case.
>> */
>>
On Tue, 2015-10-13 at 00:09 +0530, Aneesh Kumar K.V wrote:
> We convert them to static inline functions here, as we did with pte_val in
> the previous patch
This breaks ppc40x_defconfig & 40x/ep405_defconfig with:
arch/powerpc/mm/40x_mmu.c: In function 'mmu_mapin_ram':
arch/powerpc/mm/40x_mmu.c:11