PMU counter support functions enforce event constraints for a group of
events to check whether all events in the group can be monitored. In case
of event codes using PMC5 and PMC6 (500fa and 600f4 respectively),
not all constraints are applicable, say the threshold or sample bits.
But current code include
On Sun, Sep 20 2020 at 10:42, Linus Torvalds wrote:
> On Sun, Sep 20, 2020 at 10:40 AM Thomas Gleixner wrote:
>>
>> I think the more obvious solution is to split the whole exercise:
>>
>> schedule()
>> prepare_switch()
>> unmap()
>>
>> switch_to()
>>
>> finish_switch()
>>
On Sat, Sep 19, 2020 at 01:06:34AM -0700, Kees Cook wrote:
> In preparation for performing actions during ptrace syscall exit, save
> the syscall number during ptrace syscall entry. Some architectures do
> not have the syscall number available during ptrace syscall exit.
>
> Suggested-by: Thadeu Li
On Sat, Sep 19, 2020 at 01:06:35AM -0700, Kees Cook wrote:
> In preparation for setting syscall nr and ret values separately, refactor
> the helpers to take a pointer to a value, so that a NULL can indicate
> "do not change this respective value". This is done to keep the regset
> read/write happen
On Sat, Sep 19, 2020 at 01:06:36AM -0700, Kees Cook wrote:
> Some archs (like powerpc) only support changing the return code during
> syscall exit when ptrace is used. Test the entry vs exit phases to see
> which portions of the syscall number and return values need to be set at
> which phase.
On Sat, Sep 19, 2020 at 01:06:37AM -0700, Kees Cook wrote:
> As the UAPI headers start to appear in distros, we need to avoid outdated
> versions of struct clone_args to be able to test modern features;
> rename to "struct __clone_args". Additionally update the struct size
> macro names to match UA
On 19/09/2020 20:10, Sasha Levin wrote:
On Fri, Sep 18, 2020 at 08:35:06AM +0200, Frederic Barrat wrote:
On 18/09/2020 03:57, Sasha Levin wrote:
From: Frederic Barrat
[ Upstream commit 05dd7da76986937fb288b4213b1fa10dbe0d1b33 ]
This patch is not desirable for stable, for 5.4
From: Satheesh Rajendran
Add document entry for kvm_cma_resv_ratio kernel param which
is used to alter the KVM contiguous memory allocation percentage
for hash pagetable allocation used by hash mode PowerPC KVM guests.
Cc: linux-ker...@vger.kernel.org
Cc: kvm-...@vger.kernel.org
Cc: linuxppc-dev
Here are some optimizations and fixes to make CPU online/offline
faster and hence result in a faster bootup.
It's based on top of my v5 coregroup support patchset.
https://lore.kernel.org/linuxppc-dev/20200810071834.92514-1-sri...@linux.vnet.ibm.com/t/#u
Anton reported that his 4096 cpu (1024 cores
On Power, cpu_core_mask and cpu_cpu_mask refer to the same set of CPUs.
cpu_cpu_mask is needed by the scheduler, hence look at deprecating
cpu_core_mask. Before deleting cpu_core_mask, ensure its only user
is moved to cpu_cpu_mask.
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Pig
Anton Blanchard reported that his 4096 vcpu KVM guest took around 30
minutes to boot. He traced this to the time taken to iterate while
setting the cpu_core_mask.
Further analysis shows that cpu_core_mask and cpu_cpu_mask for any CPU
would be equal on Power. However updating cpu_core_mask too
Now that cpu_core_mask has been removed and topology_core_cpumask has
been updated to use cpu_cpu_mask, we no longer need
get_physical_package_id.
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
All the arch-specific topology cpumasks are within a node/DIE.
However, when setting these per-CPU cpumasks, the system traverses
all the online CPUs. This is redundant.
Reduce the traversal to only the CPUs that are online in the node to
which the CPU belongs.
Cc: linuxppc-dev
Cc: LKML
Cc: M
update_mask_by_l2 is called only once, but it takes cpu_l2_cache_mask
as a parameter. Instead of passing cpu_l2_cache_mask, use it directly
in update_mask_by_l2.
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc:
Currently on hotplug/hotunplug, a CPU iterates through all the CPUs in
its core to find threads in its thread group. However this info is
already captured in cpu_l1_cache_map. Hence reduce the iterations and
clean up the add_cpu_to_smallcore_masks function.
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
C
While offlining a CPU, the system currently iterates through all the
CPUs in the DIE to clear the sibling, l2_cache and smallcore maps.
However, if there are many cores in a DIE, the system can end up
spending more time iterating through CPUs which are completely unrelated.
Optimize this by only iterating throu
CACHE and COREGROUP domains are now part of default topology. However on
systems that don't support CACHE or COREGROUP, these domains will
eventually be degenerated. The degeneration happens per CPU. Do note the
current fixup_topology() logic ensures that mask of a domain that is not
supported on t
All threads of an SMT4 core can either be part of this CPU's l2-cache
mask or be unrelated to it. Use this relation to reduce the number of
iterations needed to find all the CPUs that share the same l2-cache.
Use a temporary mask to iterate through the CPUs that may share l2_cach
Move the logic for updating the coregroup mask of a CPU to its own
function. This will help in reworking the update of the coregroup mask
in a subsequent patch.
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Mich
All threads of an SMT4/SMT8 core can either be part of the CPU's
coregroup mask or lie outside the coregroup. Use this relation to
reduce the number of iterations needed to find all the CPUs that share
the same coregroup.
Use a temporary mask to iterate through the CPUs that may share
coregroup mask. Also i
> On Sat, Sep 19, 2020 at 02:24:10PM +, David Laight wrote:
> > I thought about that change while writing my import_iovec() =>
> > iovec_import()
> > patch - and thought that the io_uring code would (as usual) cause grief.
> >
> > Christoph - did you see those patches?
Link to cover email.
h
Build the kernel with `C=2`:
arch/powerpc/kvm/book3s_hv_nested.c:572:25: warning: symbol
'kvmhv_alloc_nested' was not declared. Should it be static?
arch/powerpc/kvm/book3s_64_mmu_radix.c:350:6: warning: symbol
'kvmppc_radix_set_pte_at' was not declared. Should it be static?
arch/powerpc/kvm/book3s
On Fri, Aug 28, 2020 at 12:14:28PM +1000, Michael Ellerman wrote:
> Dmitry Safonov <0x7f454...@gmail.com> writes:
> > On Wed, 26 Aug 2020 at 15:39, Michael Ellerman wrote:
> >> Christophe Leroy writes:
> >> We added a test for vdso unmap recently because it happened to trigger a
> >> KAUP failure
Build the kernel with `C=2`:
arch/powerpc/platforms/powernv/opal-core.c:74:16: warning: symbol
'mpipl_kobj' was not declared. Should it be static?
Signed-off-by: Wang Wensheng
---
arch/powerpc/platforms/powernv/opal-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powe
Build the kernel with `C=2`:
arch/powerpc/perf/isa207-common.c:24:18: warning: symbol
'isa207_pmu_format_attr' was not declared. Should it be static?
arch/powerpc/perf/power9-pmu.c:101:5: warning: symbol 'p9_dd21_bl_ev'
was not declared. Should it be static?
arch/powerpc/perf/power9-pmu.c:115:5: warnin
Build the kernel with `C=2`:
arch/powerpc/kernel/security.c:253:6: warning: symbol 'stf_barrier' was
not declared. Should it be static?
Signed-off-by: Wang Wensheng
---
arch/powerpc/kernel/security.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/security.c b
Simplify the return expression.
Signed-off-by: Qinglang Miao
---
drivers/misc/ocxl/core.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/drivers/misc/ocxl/core.c b/drivers/misc/ocxl/core.c
index b7a09b21a..aebfc53a2 100644
--- a/drivers/misc/ocxl/core.c
+++ b/drivers/m
Jing Xiangfeng writes:
> The variable ret is being initialized with '-ENOMEM', which is meaningless.
> So remove it.
>
> Signed-off-by: Jing Xiangfeng
Reviewed-by: Fabiano Rosas
> ---
> arch/powerpc/kvm/book3s_64_vio.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/ar
From: David Laight
This lets the compiler inline it into import_iovec() generating
much better code.
Signed-off-by: David Laight
[hch: drop the now pointless kerneldoc for a static function, and update
a few other comments]
Signed-off-by: Christoph Hellwig
---
fs/read_write.c| 1
Use in compat_syscall to import either native or the compat iovecs, and
remove the now superfluous compat_import_iovec, which removes the need for
special compat logic in most callers. Only io_uring needs special
treatment given that it can call import_iovec from kernel threads acting
on behalf of
Explicitly check for the magic value instead of implicitly relying on
its numeric representation. Also drop the rather pointless unlikely
annotation.
Signed-off-by: Christoph Hellwig
---
lib/iov_iter.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/lib/iov_iter.c b/lib
From: David Laight
This is the only direct call of rw_copy_check_uvector(). Removing it
will allow rw_copy_check_uvector() to be inlined into import_iovec(),
while only paying a minor price by setting up an otherwise unused
iov_iter in the process_vm_readv/process_vm_writev syscalls that aren't
There is no compat_sys_readv64v2 syscall, only a compat_sys_preadv64v2
one.
Signed-off-by: Christoph Hellwig
---
include/linux/compat.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/compat.h b/include/linux/compat.h
index b354ce58966e2d..654c1ec36671a4 100644
Stop duplicating the iovec verify code, and instead add a
__import_iovec helper that does the whole verify and import, but takes
a bool compat to decide on the native or compat layout. This also
ends up massively simplifying the calling conventions.
Signed-off-by: Christoph Hellwig
---
lib
Hi Al,
this series changes import_iovec to transparently deal with compat iovec
structures, and then cleans up a lot of code duplication.
Changes since v1:
- improve a commit message
- drop a pointless unlikely
- drop the PF_FORCE_COMPAT flag
- add a few more cleanups (including two from Davi
Now that import_iovec handles compat iovecs, the native readv and writev
syscalls can be used for the compat case as well.
Signed-off-by: Christoph Hellwig
---
arch/arm64/include/asm/unistd32.h | 4 ++--
arch/mips/kernel/syscalls/syscall_n32.tbl | 4 ++--
arch/mips/ke
Now that import_iovec handles compat iovecs, the native vmsplice syscall
can be used for the compat case as well.
Signed-off-by: Christoph Hellwig
---
arch/arm64/include/asm/unistd32.h | 2 +-
arch/mips/kernel/syscalls/syscall_n32.tbl | 2 +-
arch/mips/kernel/syscalls/syscall_o
Now that import_iovec handles compat iovecs as well, all the duplicated
code in the compat readv/writev helpers is not needed. Remove them
and switch the compat syscall handlers to use the native helpers.
Signed-off-by: Christoph Hellwig
---
fs/read_write.c | 179 ---
Now that import_iovec handles compat iovecs, the native version of
keyctl_instantiate_key_iov can be used for the compat case as well.
Signed-off-by: Christoph Hellwig
---
security/keys/compat.c | 36 ++--
security/keys/internal.h | 5 -
security/keys/keyct
Now that import_iovec handles compat iovecs, the native syscalls
can be used for the compat case as well.
Signed-off-by: Christoph Hellwig
---
arch/arm64/include/asm/unistd32.h | 4 +-
arch/mips/kernel/syscalls/syscall_n32.tbl | 4 +-
arch/mips/kernel/syscalls/syscall_o32.tbl
On Mon, Sep 21, 2020 at 04:34:25PM +0200, Christoph Hellwig wrote:
> {
> - WARN_ON(direction & ~(READ | WRITE));
> + WARN_ON(direction & ~(READ | WRITE | CHECK_IOVEC_ONLY));
This is now a no-op because:
include/linux/fs.h:#define CHECK_IOVEC_ONLY -1
I'd suggest we renumber it to 2?
(RE
On Mon, Sep 21, 2020 at 04:34:25PM +0200, Christoph Hellwig wrote:
> From: David Laight
>
> This is the only direct call of rw_copy_check_uvector(). Removing it
> will allow rw_copy_check_uvector() to be inlined into import_iovec(),
> while only paying a minor price by setting up an otherwise un
From: Christoph Hellwig
> Sent: 21 September 2020 15:34
>
> Explicitly check for the magic value instead of implicitly relying on
> its numeric representation. Also drop the rather pointless unlikely
> annotation.
>
> Signed-off-by: Christoph Hellwig
> ---
> lib/iov_iter.c | 5 ++---
> 1 file
On Mon, Sep 21, 2020 at 04:34:27PM +0200, Christoph Hellwig wrote:
> Explicitly check for the magic value instead of implicitly relying on
> its numeric representation. Also drop the rather pointless unlikely
> annotation.
See above - I would rather have CHECK_IOVEC_ONLY gone.
The reason for doi
On Mon, Sep 21, 2020 at 03:05:32PM +, David Laight wrote:
> I've actually no idea:
> 1) Why there is an access_ok() check here.
>It will be repeated by the user copy functions.
Early sanity check.
> 2) Why it isn't done when called from mm/process_vm_access.c.
>Ok, the addresses refe
On Mon, Sep 21, 2020 at 04:34:28PM +0200, Christoph Hellwig wrote:
> +static int compat_copy_iovecs_from_user(struct iovec *iov,
> + const struct iovec __user *uvector, unsigned long nr_segs)
> +{
> + const struct compat_iovec __user *uiov =
> + (const struct compat_iove
On Mon, Sep 21, 2020 at 04:34:29PM +0200, Christoph Hellwig wrote:
> Use in compat_syscall to import either native or the compat iovecs, and
> remove the now superfluous compat_import_iovec, which removes the need for
> special compat logic in most callers. Only io_uring needs special
> treatment g
From: Al Viro
> Sent: 21 September 2020 16:02
>
> On Mon, Sep 21, 2020 at 04:34:25PM +0200, Christoph Hellwig wrote:
> > From: David Laight
> >
> > This is the only direct call of rw_copy_check_uvector(). Removing it
> > will allow rw_copy_check_uvector() to be inlined into import_iovec(),
> > w
From: Al Viro
> Sent: 21 September 2020 16:11
> On Mon, Sep 21, 2020 at 03:05:32PM +, David Laight wrote:
>
> > I've actually no idea:
> > 1) Why there is an access_ok() check here.
> >It will be repeated by the user copy functions.
>
> Early sanity check.
>
> > 2) Why it isn't done when
On Mon, Sep 21, 2020 at 03:21:35PM +, David Laight wrote:
> You really don't want to be looping through the array twice.
Profiles, please.
> I think the 'length' check can be optimised to do something like:
> for (...) {
> ssize_t len = (ssize_t)iov[seg].iov_len;
>
On 21/09/2020 15:10, Qinglang Miao wrote:
Simplify the return expression.
Signed-off-by: Qinglang Miao
---
Thanks!
Acked-by: Frederic Barrat
drivers/misc/ocxl/core.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/drivers/misc/ocxl/core.c b/drivers/mi
From: Al Viro
> Sent: 21 September 2020 16:30
>
> On Mon, Sep 21, 2020 at 03:21:35PM +, David Laight wrote:
>
> > You really don't want to be looping through the array twice.
>
> Profiles, please.
I did some profiling of send() v sendmsg() much earlier in the year.
I can't remember the exac
On 9/21/20 2:02 AM, sathn...@linux.vnet.ibm.com wrote:
> From: Satheesh Rajendran
>
> Add document entry for kvm_cma_resv_ratio kernel param which
> is used to alter the KVM contiguous memory allocation percentage
> for hash pagetable allocation used by hash mode PowerPC KVM guests.
>
> Cc: linu
On Mon, Sep 21, 2020 at 04:29:37PM +0100, Al Viro wrote:
> On Mon, Sep 21, 2020 at 03:21:35PM +, David Laight wrote:
>
> > You really don't want to be looping through the array twice.
>
> Profiles, please.
Given that the iov array should be cache hot I'd be surprised to
see a huge difference
On 20/09/2020 01:22, Andy Lutomirski wrote:
>
>> On Sep 19, 2020, at 2:16 PM, Arnd Bergmann wrote:
>>
>> On Sat, Sep 19, 2020 at 6:21 PM Andy Lutomirski wrote:
On Fri, Sep 18, 2020 at 8:16 AM Christoph Hellwig wrote:
On Fri, Sep 18, 2020 at 02:58:22PM +0100, Al Viro wrote:
> Said
On 21/09/2020 19:10, Pavel Begunkov wrote:
> On 20/09/2020 01:22, Andy Lutomirski wrote:
>>
>>> On Sep 19, 2020, at 2:16 PM, Arnd Bergmann wrote:
>>>
>>> On Sat, Sep 19, 2020 at 6:21 PM Andy Lutomirski wrote:
> On Fri, Sep 18, 2020 at 8:16 AM Christoph Hellwig wrote:
> On Fri, Sep 18, 2
On 20/09/2020 18:55, William Kucharski wrote:
> I really like that as it’s self-documenting and anyone debugging it can see
> what is actually being used at a glance.
Also creates special cases for things that few people care about,
and makes it a pain for cross-platform (cross-bitness) developme
On Mon, Sep 21, 2020 at 03:44:00PM +, David Laight wrote:
> From: Al Viro
> > Sent: 21 September 2020 16:30
> >
> > On Mon, Sep 21, 2020 at 03:21:35PM +, David Laight wrote:
> >
> > > You really don't want to be looping through the array twice.
> >
> > Profiles, please.
>
> I did some p
On 20/09/2020 22:22, Matthew Wilcox wrote:
> On Sun, Sep 20, 2020 at 08:10:31PM +0100, Al Viro wrote:
>> IMO it's much saner to mark those and refuse to touch them from io_uring...
>
> Simpler solution is to remove io_uring from the 32-bit syscall list.
> If you're a 32-bit process, you don't get
On Mon, Sep 21, 2020 at 12:39 AM Thomas Gleixner wrote:
>
> If a task is migrated to a different CPU then the mapping address will
> change which will explode in colourful ways.
Heh.
Right you are.
Maybe we really *could* call this new kmap functionality something
like "kmap_percpu()" (or maybe
On 21/09/2020 00:13, David Laight wrote:
> From: Arnd Bergmann
>> Sent: 20 September 2020 21:49
>>
>> On Sun, Sep 20, 2020 at 9:28 PM Andy Lutomirski wrote:
>>> On Sun, Sep 20, 2020 at 12:23 PM Matthew Wilcox wrote:
On Sun, Sep 20, 2020 at 08:10:31PM +0100, Al Viro wrote:
> IMO it's
On 9/19/20 3:29 AM, Qinglang Miao wrote:
> Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
>
> Signed-off-by: Qinglang Miao
Reviewed-by: Cédric Le Goater
> ---
> v2: based on linux-next(20200917), and can be applied to
> mainline cleanly now.
>
> arch/powerpc/kvm/book3s_xive_native
On Mon, Sep 21, 2020 at 11:35 AM Nick Desaulniers
wrote:
>
> Hello DAX maintainers,
> I noticed our PPC64LE builds failing last night:
> https://travis-ci.com/github/ClangBuiltLinux/continuous-integration/jobs/388047043
> https://travis-ci.com/github/ClangBuiltLinux/continuous-integration/jobs/388
From: Viorel Suman
DAI driver for new XCVR IP found in i.MX8MP.
Viorel Suman (2):
ASoC: fsl_xcvr: Add XCVR ASoC CPU DAI driver
ASoC: dt-bindings: fsl_xcvr: Add document for XCVR
Changes since v1:
- improved 6- and 12-ch layout comment
- used regmap polling function, improved
clocks han
From: Viorel Suman
XCVR (Audio Transceiver) is an on-chip functional module found
on i.MX8MP. It supports HDMI2.1 eARC, HDMI1.4 ARC and SPDIF.
Signed-off-by: Viorel Suman
---
sound/soc/fsl/Kconfig| 10 +
sound/soc/fsl/Makefile |2 +
sound/soc/fsl/fsl_xcvr.c | 1343 +++
From: Viorel Suman
XCVR (Audio Transceiver) is a new IP module found on i.MX8MP.
Signed-off-by: Viorel Suman
---
.../devicetree/bindings/sound/fsl,xcvr.yaml| 103 +
1 file changed, 103 insertions(+)
create mode 100644 Documentation/devicetree/bindings/sound/fsl,xcv
On Mon, Sep 21 2020 at 09:24, Linus Torvalds wrote:
> On Mon, Sep 21, 2020 at 12:39 AM Thomas Gleixner wrote:
>>
>> If a task is migrated to a different CPU then the mapping address will
>> change which will explode in colourful ways.
>
> Right you are.
>
> Maybe we really *could* call this new km
On Sat, Sep 19, 2020 at 06:39:06PM +0100, Matthew Wilcox wrote:
> On Sat, Sep 19, 2020 at 10:18:54AM -0700, Linus Torvalds wrote:
> > On Sat, Sep 19, 2020 at 2:50 AM Thomas Gleixner wrote:
> > >
> > > this provides a preemptible variant of kmap_atomic & related
> > > interfaces. This is achieved b
On Fri, 4 Sep 2020 12:28:12 +1200, Chris Packham wrote:
> The SPIE register contains counts for the TX FIFO so any time the irq
> handler was invoked we would attempt to process the RX/TX fifos. Use the
> SPIM value to mask the events so that we only process interrupts that
> were expected.
>
> Th
> -Original Message-
> From: Ran Wang
> Sent: Wednesday, September 16, 2020 3:19 AM
> To: Leo Li ; Rob Herring ;
> Shawn Guo
> Cc: linuxppc-dev@lists.ozlabs.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; linux-ker...@vger.kernel.org; Ran Wang
>
> Subject: [PA
> -Original Message-
> From: Ran Wang
> Sent: Wednesday, September 16, 2020 3:18 AM
> To: Leo Li ; Rob Herring ;
> Shawn Guo
> Cc: linuxppc-dev@lists.ozlabs.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; linux-ker...@vger.kernel.org; Biwen Li
> ; Ran Wang
> S
> -Original Message-
> From: Ran Wang
> Sent: Wednesday, September 16, 2020 3:18 AM
> To: Leo Li ; Rob Herring ;
> Shawn Guo
> Cc: linuxppc-dev@lists.ozlabs.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; linux-ker...@vger.kernel.org; Biwen Li
> ; Ran Wang
> S
> -Original Message-
> From: Ran Wang
> Sent: Wednesday, September 16, 2020 3:18 AM
> To: Leo Li ; Rob Herring ;
> Shawn Guo
> Cc: linuxppc-dev@lists.ozlabs.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; linux-ker...@vger.kernel.org; Biwen Li
> ; Ran Wang
> S
> -Original Message-
> From: Ran Wang
> Sent: Wednesday, September 16, 2020 3:19 AM
> To: Leo Li ; Rob Herring ;
> Shawn Guo
> Cc: linuxppc-dev@lists.ozlabs.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; linux-ker...@vger.kernel.org; Ran Wang
>
> Subject: [PA
On Mon, Sep 21, 2020 at 02:32:20PM +0530, sathn...@linux.vnet.ibm.com wrote:
> From: Satheesh Rajendran
>
> Add document entry for kvm_cma_resv_ratio kernel param which
> is used to alter the KVM contiguous memory allocation percentage
> for hash pagetable allocation used by hash mode PowerPC KVM
On Mon, Sep 21, 2020 at 11:47 AM Dan Williams wrote:
>
> On Mon, Sep 21, 2020 at 11:35 AM Nick Desaulniers
> wrote:
> >
> > Hello DAX maintainers,
> > I noticed our PPC64LE builds failing last night:
> > https://travis-ci.com/github/ClangBuiltLinux/continuous-integration/jobs/388047043
> > https:
On Mon, Sep 21, 2020 at 9:15 AM Pavel Begunkov wrote:
>
> On 21/09/2020 19:10, Pavel Begunkov wrote:
> > On 20/09/2020 01:22, Andy Lutomirski wrote:
> >>
> >>> On Sep 19, 2020, at 2:16 PM, Arnd Bergmann wrote:
> >>>
> >>> On Sat, Sep 19, 2020 at 6:21 PM Andy Lutomirski wrote:
> > On Fri, Se
On 22/09/2020 02:51, Andy Lutomirski wrote:
> On Mon, Sep 21, 2020 at 9:15 AM Pavel Begunkov wrote:
>>
>> On 21/09/2020 19:10, Pavel Begunkov wrote:
>>> On 20/09/2020 01:22, Andy Lutomirski wrote:
> On Sep 19, 2020, at 2:16 PM, Arnd Bergmann wrote:
>
> On Sat, Sep 19, 2020 at
On Mon, Sep 21, 2020 at 5:24 PM Pavel Begunkov wrote:
>
>
>
> On 22/09/2020 02:51, Andy Lutomirski wrote:
> > On Mon, Sep 21, 2020 at 9:15 AM Pavel Begunkov
> > wrote:
> >>
> >> On 21/09/2020 19:10, Pavel Begunkov wrote:
> >>> On 20/09/2020 01:22, Andy Lutomirski wrote:
>
> > On Sep 19,
Hi Leo, Rob,
On Tuesday, September 22, 2020 6:20 AM, Leo Li wrote:
>
> > -Original Message-
> > From: Ran Wang
> > Sent: Wednesday, September 16, 2020 3:18 AM
> > To: Leo Li ; Rob Herring ;
> > Shawn Guo
> > Cc: linuxppc-dev@lists.ozlabs.org;
> > linux-arm-ker...@lists.infradead.org;
>
Hi Leo
Tuesday, September 22, 2020 6:43 AM, Leo Li wrote:
>
>
> > -Original Message-
> > From: Ran Wang
> > Sent: Wednesday, September 16, 2020 3:18 AM
> > To: Leo Li ; Rob Herring ;
> > Shawn Guo
> > Cc: linuxppc-dev@lists.ozlabs.org;
> > linux-arm-ker...@lists.infradead.org;
> > devi
Hi Leo,
On Tuesday, September 22, 2020 6:59 AM, Leo Li wrote:
>
> > -Original Message-
> > From: Ran Wang
> > Sent: Wednesday, September 16, 2020 3:18 AM
> > To: Leo Li ; Rob Herring ;
> > Shawn Guo
> > Cc: linuxppc-dev@lists.ozlabs.org;
> > linux-arm-ker...@lists.infradead.org;
> > dev
On 15/09/2020 16:50, Christoph Hellwig wrote:
> On Wed, Sep 09, 2020 at 07:36:04PM +1000, Alexey Kardashevskiy wrote:
>> I want dma_get_required_mask() to return the bigger mask always.
>>
>> Now it depends on (in dma_alloc_direct()):
>> 1. dev->dma_ops_bypass: set via pci_set_(coherent_)dma_mas
On Fri, Sep 18, 2020 at 5:21 PM Michael Ellerman wrote:
>
> Hi Jordan,
>
> Jordan Niethe writes:
> > Currently in generic_secondary_smp_init(), cur_cpu_spec->cpu_restore()
> > is called before a stack has been set up in r1. This was previously fine
> > as the cpu_restore() functions were implemen
On Fri, 11 Sep 2020 16:28:26 -0500, Brian King wrote:
> When a canister on a FS9100, or similar storage, running in NPIV mode,
> is rebooted, its WWPNs will fail over to another canister. When this
> occurs, we see a WWPN going away from the fabric at one N-Port ID,
> and, a short time later, the
Currently in generic_secondary_smp_init(), cur_cpu_spec->cpu_restore()
is called before a stack has been set up in r1. This was previously fine
as the cpu_restore() functions were implemented in assembly and did not
use a stack. However commit 5a61ef74f269 ("powerpc/64s: Support new
device tree bin
The only thing keeping the cpu_setup() and cpu_restore() functions used
in the cputable entries for Power7, Power8, Power9 and Power10 in
assembly was cpu_restore() being called before there was a stack in
generic_secondary_smp_init(). Commit ("powerpc/64: Set up a kernel stack
for secondaries befo
On 22/09/2020 07:53, Jordan Niethe wrote:
Currently in generic_secondary_smp_init(), cur_cpu_spec->cpu_restore()
is called before a stack has been set up in r1. This was previously fine
as the cpu_restore() functions were implemented in assembly and did not
use a stack. However commit 5a6
On Tue, Sep 22, 2020 at 3:59 PM Christophe Leroy
wrote:
>
>
>
> On 22/09/2020 07:53, Jordan Niethe wrote:
> > Currently in generic_secondary_smp_init(), cur_cpu_spec->cpu_restore()
> > is called before a stack has been set up in r1. This was previously fine
> > as the cpu_restore() functions
On 22/09/2020 03:58, Andy Lutomirski wrote:
> On Mon, Sep 21, 2020 at 5:24 PM Pavel Begunkov wrote:
>>> Ah, so reading /dev/input/event* would suffer from the same issue,
>>> and that one would in fact be broken by your patch in the hypothetical
>>> case that someone tried to use io_ur
On 9/21/20 12:40 PM, Athira Rajeev wrote:
PMU counter support functions enforce event constraints for a group of
events to check whether all events in the group can be monitored. In case
of event codes using PMC5 and PMC6 (500fa and 600f4 respectively),
not all constraints are applicable, say the thr