Hi Mike,
Mike Rapoport wrote on Thu, May 2, 2019 at 11:30 PM:
>
> The nds32 implementation of pte_alloc_one_kernel() differs from the generic
> in the use of __GFP_RETRY_MAYFAIL flag, which is removed after the
> conversion.
>
> The nds32 version of pte_alloc_one() missed the call to pgtable_page_ctor()
> and
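For reference, the generic pattern being referred to looks roughly like this (a minimal sketch of the common pte_alloc_one() shape of that era, not the nds32 code itself):

	pgtable_t pte_alloc_one(struct mm_struct *mm)
	{
		struct page *pte;

		pte = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (!pte)
			return NULL;
		/* user PTE pages need the ctor (split PTL setup, accounting) */
		if (!pgtable_page_ctor(pte)) {
			__free_page(pte);
			return NULL;
		}
		return pte;
	}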
So far the pseries platforms have always been using IOMMU, making SWIOTLB
unnecessary. Now we want secure guests, which means devices can only
access certain areas of guest physical memory; we are going to use
SWIOTLB for this purpose.
This allows SWIOTLB for pseries. By default there is no change in
The commit 8617a5c5bc00 ("powerpc/dma: handle iommu bypass in
dma_iommu_ops") merged direct DMA ops into the IOMMU DMA ops allowing
SWIOTLB as well but only for mapping; the unmapping and bouncing parts
were left unmodified.
This adds missing direct unmapping calls to .unmap_page() and .unmap_sg()
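Schematically, the change amounts to something like this (a sketch assuming the helper names used in arch/powerpc/kernel/dma-iommu.c; not the exact patch):

	static void dma_iommu_unmap_page(struct device *dev, dma_addr_t dma_handle,
					 size_t size, enum dma_data_direction dir,
					 unsigned long attrs)
	{
		if (dma_iommu_map_bypass(dev, attrs))
			/* mirror the map path: direct DMA, hence SWIOTLB bouncing */
			dma_direct_unmap_page(dev, dma_handle, size, dir, attrs);
		else
			iommu_unmap_page(get_iommu_table_base(dev), dma_handle,
					 size, dir, attrs);
	}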
This is an attempt to allow PCI pass through to a secure guest when
hardware can only access insecure memory. This allows SWIOTLB use
for passed through devices.
Later on secure VMs will unsecure SWIOTLB bounce buffers for DMA
and the rest of the guest RAM will be unavailable to the hardware
by
From: Nicholas Piggin
[ Upstream commit f2910f0e6835339e6ce82cef22fa15718b7e3bfa ]
GCC 4.6 is the minimum supported now.
Signed-off-by: Nicholas Piggin
Reviewed-by: Joel Stanley
Signed-off-by: Michael Ellerman
Signed-off-by: Sasha Levin
---
arch/powerpc/Makefile | 31 ++
From: Nicholas Piggin
[ Upstream commit 88b9a3d1425a436e95c41f09986fdae2daee437a ]
The xmon debugger IPI handler waits in the callback function while
xmon is still active. This means the target CPUs don't complete the IPI,
and the initiator always times out waiting for them.
Things manage to work after th
From: Nicholas Piggin
[ Upstream commit 1b5fc84aba170bdfe3533396ca9662ceea1609b7 ]
The NMI IPI timeout logic is broken: if __smp_send_nmi_ipi() times out
on the first condition, delay_us will be zero, which sends it into
the second spin loop with no timeout, so it will spin forever.
Fixes: 5b
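The broken shape, in simplified form (first_wait_done()/second_wait_done() are placeholders, not the real smp.c conditions):

	/* first wait: budget of delay_us microseconds */
	while (!first_wait_done()) {
		udelay(1);
		if (delay_us) {
			delay_us--;
			if (!delay_us)
				break;	/* timed out with delay_us == 0 */
		}
	}

	/* second wait: if delay_us is already 0, the if () never runs,
	 * there is no break path, and this loop spins forever */
	while (!second_wait_done()) {
		udelay(1);
		if (delay_us) {
			delay_us--;
			if (!delay_us)
				break;
		}
	}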
On 7/5/19 12:43 pm, Christopher M. Riedl wrote:
Add support for disabling the kernel-implemented spectre v2 mitigation
(count cache flush on context switch) via the nospectre_v2 cmdline
option.
Suggested-by: Michael Ellerman
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
On PowerNV and pSeries, devices currently acquire EEH support from
several different places: boot-time devices from eeh_probe_devices()
and eeh_addr_cache_build(), Virtual Function devices from the pcibios
bus add device hooks, and hot-plugged devices from pci_hp_add_devices()
(with other platforms
Now that EEH support for all devices (on PowerNV and pSeries) is
provided by the pcibios bus add device hooks, eeh_probe_devices() and
eeh_addr_cache_build() are redundant and can be removed.
Move the EEH enabled message into its own function so that it can be
called from multiple places.
Note t
Also remove useless comment.
Signed-off-by: Sam Bobroff
Reviewed-by: Alexey Kardashevskiy
---
arch/powerpc/kernel/eeh.c| 2 +-
arch/powerpc/platforms/powernv/eeh-powernv.c | 14
arch/powerpc/platforms/pseries/eeh_pseries.c | 23 +++-
3 files cha
The EEH_DEV_NO_HANDLER flag is used by the EEH system to prevent the
use of driver callbacks in drivers that have been bound part way
through the recovery process. This is necessary to prevent later stage
handlers from being called when the earlier stage handlers haven't,
which can be confusing for
Hi all,
Here is v2, addressing feedback from v1.
Original cover letter follows, slightly updated for v2:
This patch set adds support for EEH recovery of hot plugged devices on pSeries
machines. Specifically, devices discovered by PCI rescanning using
/sys/bus/pci/rescan, which includes devices h
The EEH address cache is currently initialized and populated by a
single function: eeh_addr_cache_build(). While the initial population
of the cache can only be done once resources are allocated,
initialization (just setting up a spinlock) could be done much
earlier.
So move the initialization st
The pcibios_init() function for 64-bit PowerPC currently calls
pci_bus_add_devices() before pcibios_resource_survey(), which seems
incorrect because it adds devices and attempts to bind their drivers
before allocating their resources (although no problems seem to be
apparent).
So move the call to
On Fri, May 03, 2019 at 11:28:22PM +, Jason Gunthorpe wrote:
> On Fri, May 03, 2019 at 01:16:30PM -0700, Daniel Jordan wrote:
> > Andrew, this one patch replaces these six from [1]:
> >
> > mm-change-locked_vms-type-from-unsigned-long-to-atomic64_t.patch
> > vfio-type1-drop-mmap_sem-no
Add support for disabling the kernel-implemented spectre v2 mitigation
(count cache flush on context switch) via the nospectre_v2 cmdline
option.
Suggested-by: Michael Ellerman
Signed-off-by: Christopher M. Riedl
---
v1->v2:
add call to toggle_count_cache_flush(false)
arch/powerpc/kern
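A minimal sketch of how such a cmdline switch is typically wired up (illustrative boilerplate; toggle_count_cache_flush() is the helper named in the change log above, the rest is generic early_param usage):

	static bool no_spectrev2 __initdata;

	static int __init handle_nospectre_v2(char *p)
	{
		no_spectrev2 = true;
		return 0;
	}
	early_param("nospectre_v2", handle_nospectre_v2);

	/* later, during mitigation setup: */
	if (no_spectrev2)
		toggle_count_cache_flush(false);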
The patch
ASoC: fsl_esai: Add pm runtime function
has been applied to the asoc tree at
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-5.3
All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent t
On Mon, May 06, 2019 at 03:58:45PM -0600, Alex Williamson wrote:
> On Fri, 19 Apr 2019 17:37:17 +0200
> Greg Kurz wrote:
>
> > If vfio_pci_register_dev_region() fails then we should rollback
> > previous changes, ie. unmap the ATSD registers.
> >
> > Signed-off-by: Greg Kurz
> > ---
>
> Applie
> On May 6, 2019 at 9:29 PM Michael Ellerman wrote:
>
>
> Christopher M Riedl writes:
> >> On May 5, 2019 at 9:32 PM Andrew Donnellan wrote:
> >> On 6/5/19 8:10 am, Christopher M. Riedl wrote:
> >> > Add support for disabling the kernel implemented spectre v2 mitigation
> >> > (count cache f
Christopher M Riedl writes:
>> On May 5, 2019 at 9:32 PM Andrew Donnellan wrote:
>> On 6/5/19 8:10 am, Christopher M. Riedl wrote:
>> > Add support for disabling the kernel implemented spectre v2 mitigation
>> > (count cache flush on context switch) via the nospectre_v2 cmdline
>> > option.
>> >
On Fri, 19 Apr 2019 17:37:17 +0200
Greg Kurz wrote:
> If vfio_pci_register_dev_region() fails then we should rollback
> previous changes, ie. unmap the ATSD registers.
>
> Signed-off-by: Greg Kurz
> ---
Applied to vfio next branch for v5.2 with Alexey's R-b. Thanks!
Alex
> drivers/vfio/pci
NVMEM support was added to of_get_mac_address, so it can now return
ERR_PTR encoded error values; we need to adjust all current users of
of_get_mac_address accordingly.
While at it, remove the superfluous is_valid_ether_addr check, as the MAC
address returned from of_get_mac_address is always
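A typical caller-side adjustment looks something like this (a generic sketch with made-up driver variables, not any particular patch in the series):

	const void *mac = of_get_mac_address(np);

	if (!IS_ERR(mac))
		ether_addr_copy(ndev->dev_addr, mac);
	else
		eth_hw_addr_random(ndev);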
Hi all,
Commit
04a1942933ce ("powerpc/mm: Fix hugetlb page initialization")
is missing a Signed-off-by from its author.
--
Cheers,
Stephen Rothwell
https://bugzilla.kernel.org/show_bug.cgi?id=203517
Erhard F. (erhar...@mailbox.org) changed:
What              |Removed      |Added
Kernel Version    |5.1.0-rc7    |5.1.0-rc1
--- Comment
https://bugzilla.kernel.org/show_bug.cgi?id=203515
--- Comment #6 from Erhard F. (erhar...@mailbox.org) ---
(In reply to Eric Biggers from comment #5)
> [...] That was almost a month ago though; I'm not sure whether anyone has
> actually done anything yet. I'll send a reminder.
Thanks! Apparently
On Thu, 02 May 2019 08:28:40 PDT (-0700), r...@linux.ibm.com wrote:
The only difference between the generic and RISC-V implementation of PTE
allocation is the usage of __GFP_RETRY_MAYFAIL for both kernel and user
PTEs and the absence of __GFP_ACCOUNT for the user PTEs.
The conversion to the gene
Hi Juliet,
Juliet Kim writes:
> Fix extending start/stop topology update scope during LPM
> Commit 65b9fdadfc4d ("powerpc/pseries/mobility: Extend start/stop
> topology update scope") made the change to the duration that
> topology updates are suppressed during LPM to allow the complete
> device
Hi!
On Mon, May 06, 2019 at 04:31:38PM +, Christophe Leroy wrote:
> However, I've tried your suggestion below and got an unexpected result.
> >you can do
> >
> > __asm__ __volatile__ ("dcbf %0" : : "Z"(addr) : "memory");
> >
> >to save some insns here and there. ]
This should be "dcbf %y0"
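i.e. the corrected form would look something like this (a sketch; the "Z" constraint wants a memory operand and the %y modifier prints it in the indexed reg,reg form that dcbf takes, matching the existing style in arch/powerpc/include/asm/io.h):

	static inline void dcbf(const void *addr)
	{
		__asm__ __volatile__ ("dcbf %y0" : : "Z"(*(const char *)addr) : "memory");
	}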
On 05/06/2019 04:33 AM, Michael Ellerman wrote:
Can you post an oops log? Just so if someone hits it they can possibly
recognise it from the back trace etc.
Sure. The system was already at the mercy of the oom killer (for other reasons) and
finally ran out of things to kill. Here's the sta
Hi Segher,
On 05/03/2019 06:15 PM, Segher Boessenkool wrote:
Hi Christophe,
On Fri, May 03, 2019 at 04:14:13PM +0200, Christophe Leroy wrote:
A while ago I proposed the following patch, and didn't get any comment
back on it.
I didn't see it. Maybe because of holiday :-)
Thanks for this a
Hello Satheesh,
On 4/29/19 10:05 AM, Satheesh Rajendran wrote:
> On Wed, Apr 10, 2019 at 07:04:32PM +0200, Cédric Le Goater wrote:
>> Hello,
>>
>> GitHub trees available here :
>>
>> QEMU sPAPR:
>>
>> https://github.com/legoater/qemu/commits/xive-next
>>
>> Linux/KVM:
>>
>> https://github.c
NVMEM support was added to of_get_mac_address, so it can now return
ERR_PTR encoded error values; we need to adjust all current users of
of_get_mac_address accordingly.
While at it, remove the superfluous is_valid_ether_addr check, as the MAC
address returned from of_get_mac_address is always
On Sat, Apr 13, 2019 at 01:41:36PM +1000, Michael Ellerman wrote:
> Nayna writes:
>
> > On 04/11/2019 10:47 AM, Daniel Axtens wrote:
> >> Eric Biggers writes:
> >>
> >>> Are you still planning to fix the remaining bug? I booted a ppc64le VM,
> >>> and I
> >>> see the same test failure (I think
https://bugzilla.kernel.org/show_bug.cgi?id=203515
Eric Biggers (ebigge...@gmail.com) changed:
What              |Removed      |Added
CC                |             |ebigge...@gmail.com
On Mon, May 06, 2019 at 11:58:35AM +0200, Petr Štetiar wrote:
> There was NVMEM support added to of_get_mac_address, so it could now return
> ERR_PTR encoded error values, so we need to adjust all current users of
> of_get_mac_address to this new fact.
We need a Fixes tag so we can look at the com
On Mon, May 06, 2019 at 09:34:55AM +0200, Rasmus Villemoes wrote:
> I _am_ bending the C rules a bit with the "extern some_var; asm
> volatile(".section some_section\nsome_var: blabla");". I should probably
> ask on the gcc list whether this way of defining a local symbol in
> inline assembly and r
On Mon, 2019-05-06 at 12:03:33 UTC, Sachin Sant wrote:
> This patch fixes a regression by using the correct kernel config variable
> for HUGETLB_PAGE_SIZE_VARIABLE.
>
> Without this, huge pages are disabled during kernel boot.
> [0.309496] hugetlbfs: disabling because there are no supported hugepage si
On Mon, 2019-05-06 at 08:10:43 UTC, Christophe Leroy wrote:
> commit b28c97505eb1 ("powerpc/64: Setup KUP on secondary CPUs")
> moved setup_kup() out of the __init section. As stated in that commit,
> "this is only for 64-bit". But this function is also used on PPC32,
> where the two functions call
On Mon, 2019-05-06 at 06:47:55 UTC, Christophe Leroy wrote:
> The patch identified below added pgtable-frag.o to obj-y
> but some merge witchery kept it also for obj-CONFIG_PPC_BOOK3S_64
>
> This patch clears the duplication.
>
> Fixes: 737b434d3d55 ("powerpc/mm: convert Book3E 64 to pte_fragment
On Mon, 2019-05-06 at 06:21:00 UTC, Christophe Leroy wrote:
> For an unknown reason, the new Makefile added via the KASAN support patch
> didn't land into arch/powerpc/mm/kasan/
>
> This patch restores it.
>
> Fixes: 2edb16efc899 ("powerpc/32: Add KASAN support")
> Signed-off-by: Christophe Leroy
A
On Mon, 2019-05-06 at 06:21:01 UTC, Christophe Leroy wrote:
> In commit 17312f258cf6 ("powerpc/mm: Move book3s32 specifics in
> subdirectory mm/book3s64"), ppc_mmu_32.c was moved and renamed.
>
> This patch fixes Makefiles to disable KASAN instrumentation on
> the new name and location.
>
> Fixes
On Sat, 2019-05-04 at 07:04:30 UTC, Wei Yongjun wrote:
> In case of error, the function eventfd_ctx_fdget() returns ERR_PTR() and
> never returns NULL. The NULL test in the return value check should be
> replaced with IS_ERR().
>
> This issue was detected by using the Coccinelle software.
>
> Fix
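The corrected check, schematically (a generic sketch; attach_eventfd() and its parameters are made-up names):

	static int attach_eventfd(int fd, struct eventfd_ctx **out)
	{
		struct eventfd_ctx *ctx;

		ctx = eventfd_ctx_fdget(fd);
		if (IS_ERR(ctx))	/* a NULL test here could never trigger */
			return PTR_ERR(ctx);

		*out = ctx;
		return 0;
	}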
> On May 5, 2019 at 9:32 PM Andrew Donnellan wrote:
>
>
> On 6/5/19 8:10 am, Christopher M. Riedl wrote:
> > Add support for disabling the kernel implemented spectre v2 mitigation
> > (count cache flush on context switch) via the nospectre_v2 cmdline
> > option.
> >
> > Suggested-by: Michael El
"Dmitry V. Levin" writes:
> syscall_get_error() is required to be implemented on this
> architecture in addition to already implemented syscall_get_nr(),
> syscall_get_arguments(), syscall_get_return_value(), and
> syscall_get_arch() functions in order to extend the generic
> ptrace API with PTRA
On 06/05/2019 at 14:03, Sachin Sant wrote:
This patch fixes a regression by using the correct kernel config variable
for HUGETLB_PAGE_SIZE_VARIABLE.
Without this, huge pages are disabled during kernel boot.
[0.309496] hugetlbfs: disabling because there are no supported hugepage sizes
Fixes: c57
This patch fixes a regression by using the correct kernel config variable
for HUGETLB_PAGE_SIZE_VARIABLE.
Without this, huge pages are disabled during kernel boot.
[0.309496] hugetlbfs: disabling because there are no supported hugepage sizes
Fixes: c5710cd20735 ("powerpc/mm: cleanup HPAGE_SHIFT setup"
Rick Lindsley writes:
> When the memset code was added to pgd_alloc(), it failed to consider
> that kmem_cache_alloc() can return NULL. It's uncommon, but not
> impossible under heavy memory contention.
Can you post an oops log? Just so if someone hits it they can possibly
recognise it from the b
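The kind of check being discussed, roughly (the cache/macro names follow the powerpc pgd_alloc() from memory and may not match exactly):

	pgd_t *pgd_alloc(struct mm_struct *mm)
	{
		pgd_t *pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
					      pgtable_gfp_flags(mm, GFP_KERNEL));

		if (!pgd)		/* kmem_cache_alloc() can fail under memory pressure */
			return NULL;
		memset(pgd, 0, PGD_TABLE_SIZE);
		return pgd;
	}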
Christophe Leroy writes:
> On 26/04/2019 at 17:58, Christophe Leroy wrote:
>> Book3E 64 is the only subarch not using pte_fragment. In order
>> to allow refactorisation, this patch converts it to pte_fragment.
>>
>> Reviewed-by: Aneesh Kumar K.V
>> Signed-off-by: Christophe Leroy
>> ---
>>
NVMEM support was added to of_get_mac_address, so it can now return
ERR_PTR encoded error values; we need to adjust all current users of
of_get_mac_address accordingly.
While at it, remove the superfluous is_valid_ether_addr check, as the MAC
address returned from of_get_mac_address is always
For Shared Processor LPARs, the POWER Hypervisor maintains a relatively
static mapping of the LPAR processors (vcpus) to physical processor
chips (representing the "home" node) and tries to always dispatch vcpus
on their associated physical processor chip. However, under certain
scenarios, vcpus ma
Since we would be introducing a new user of the DTL buffer in a
subsequent patch, add helpers to gatekeep use of the DTL buffer. The
current usage of the DTL buffer from debugfs is at a per-cpu level
(corresponding to the cpu debugfs file that is opened). Subsequently, we
will have users enabling/a
The H_HOME_NODE_ASSOCIATIVITY hcall can take two different flags and return
different associativity information in each case. Generalize the
existing hcall_vphn() function to take flags as an argument and to
return the result. Update the only existing user to pass the proper
arguments.
Signed-off-by:
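A sketch of the generalized helper described above (it follows the existing hcall_vphn()/plpar_hcall9() conventions in arch/powerpc/mm/numa.c; details simplified):

	static long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity)
	{
		long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
		long rc;

		rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, cpu);
		vphn_unpack_associativity(retbuf, associativity);

		return rc;
	}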
Introduce new helpers for DTL buffer allocation and registration and
have the existing code use those.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 +
arch/powerpc/platforms/pseries/lpar.c | 66 ---
arch/powerpc/platforms/pseries/setup.c
When CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is enabled, we always initialize
DTL enable mask to DTL_LOG_PREEMPT (0x2). There are no other places
where the mask is changed. As such, when reading the DTL log buffer
through debugfs, there is no need to save and restore the previous mask
value.
We don't ne
This series adds a new procfs file /proc/powerpc/vcpudispatch_stats for
providing statistics around how the LPAR processors are dispatched by
the POWER Hypervisor, in a shared LPAR environment. Patch 6/6 has more
details on how the statistics are gathered.
An example output:
$ sudo cat /pro
Introduce macros to encode the DTL enable mask fields and use those
instead of hardcoding numbers.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/lppaca.h | 11 +++
arch/powerpc/platforms/pseries/dtl.c | 8 +---
arch/powerpc/platforms/pseries/lpar.c | 2 +-
arch/
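The macros in question look roughly like this (DTL_LOG_PREEMPT == 0x2 is mentioned elsewhere in the series; the other values are the usual DTL enable bits and should be treated as illustrative):

	#define DTL_LOG_CEDE		0x1
	#define DTL_LOG_PREEMPT		0x2
	#define DTL_LOG_FAULT		0x4
	#define DTL_LOG_ALL		(DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)

	/* so callers can write, e.g. */
	lppaca_of(cpu).dtl_enable_mask = DTL_LOG_ALL;	/* instead of a bare 0x7 */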
On Tue, Apr 16, 2019 at 11:23 PM Arnd Bergmann wrote:
>
> Hi Al,
>
> It took me way longer than I had hoped to revisit this series, see
> https://lore.kernel.org/lkml/20180912150142.157913-1-a...@arndb.de/
> for the previously posted version.
>
> I've come to the point where all conversion handler
On Mon, May 06, 2019 at 10:46:11AM +0200, Frederic Barrat wrote:
> Hi,
>
> The PCI p2p and tunnel code is used by the Mellanox CX5 driver, at least
> their latest, out of tree version, which is used for CORAL. My
> understanding is that they'll upstream it at some point, though I don't
> know wh
Hi,
The PCI p2p and tunnel code is used by the Mellanox CX5 driver, at least
their latest, out of tree version, which is used for CORAL. My
understanding is that they'll upstream it at some point, though I don't
know what their schedule is like.
Fred
On 26/04/2019 at 14:49, Christoph Hell
commit b28c97505eb1 ("powerpc/64: Setup KUP on secondary CPUs")
moved setup_kup() out of the __init section. As stated in that commit,
"this is only for 64-bit". But this function is also used on PPC32,
where the two functions called by setup_kup() are in the __init
section, so setup_kup() has to e
* Rasmus Villemoes wrote:
> I _am_ bending the C rules a bit with the "extern some_var; asm
> volatile(".section some_section\nsome_var: blabla");". I should
> probably ask on the gcc list whether this way of defining a local
> symbol in inline assembly and referring to it from C is supposed
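The construct under discussion, in its general form (a generic illustration, not the actual patch):

	/* define a local symbol in a custom section from toplevel asm ... */
	asm(".pushsection .rodata.example,\"a\"\n"
	    ".balign 4\n"
	    "example_tag: .long 42\n"
	    ".popsection\n");

	/* ... and refer to it from C through a matching extern declaration */
	extern const int example_tag;

	int read_example_tag(void)
	{
		return example_tag;
	}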
On Sun, May 05, 2019 at 03:28:59AM +, S.j. Wang wrote:
> We find that it may be caused by the Transfer-Encoding format.
> We sent the patch with --transfer-encoding=8bit, but on the receiver side
> it shows:
]
> Content-Type: text/plain; charset="utf-8"
> Content-Transfer-Encoding: base64
On 06/05/2019 09.05, Ingo Molnar wrote:
>
>
> It's sad to see such nice data footprint savings go the way of the dodo
> just because GCC 4.8 is buggy.
>
> The current compatibility cut-off is GCC 4.6:
>
> GNU C 4.6 gcc --version
>
> Do we know where the GCC bug
* Rasmus Villemoes wrote:
> On 09/04/2019 23.25, Rasmus Villemoes wrote:
>
> > While refreshing these patches, which were originally just targeted at
> > x86-64, it occurred to me that despite the implementation relying on
> > inline asm, there's nothing x86 specific about it, and indeed it seem