Any comments about this version of the patchset?
:)
> -Original Message-
> From: Yangbo Lu [mailto:yangbo...@nxp.com]
> Sent: Wednesday, September 21, 2016 2:57 PM
> To: linux-...@vger.kernel.org; ulf.hans...@linaro.org; Scott Wood; Arnd
> Bergmann
> Cc: linuxppc-dev@lists.ozlabs.org; devicet.
On Sun, Sep 25, 2016 at 12:19PM -0500, Scott Wood wrote:
> -Original Message-
> From: Scott Wood [mailto:o...@buserror.net]
> Sent: Sunday, September 25, 2016 12:19 PM
> To: Qiang Zhao
> Cc: linuxppc-dev@lists.ozlabs.org; pku@gmail.com; X.B. Xie
> ; linux-ker...@vger.kernel.org
> Subj
Hi Ben,
> Hrm.. this should really be a runtime switch...
I wonder if anyone is still running POWER7 LE; perhaps we could drop it
entirely.
Anton
On Sun, 2016-09-25 at 22:35 +1000, Anton Blanchard wrote:
> From: Anton Blanchard
>
> POWER8 handles unaligned accesses in little endian mode, but commit
> 0b5e6661ac69 ("powerpc: Don't set HAVE_EFFICIENT_UNALIGNED_ACCESS on
> little endian builds") disabled it for all little endian builds.
>
> The issue with unalig
To create a movable node, we need to hotplug all of its memory into
ZONE_MOVABLE.
Note that to do this, auto_online_blocks should be off. Since the memory
will first be added to the default zone, we must explicitly use
online_movable to bring it online.
Because such a node contains no normal memory, can_o
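For illustration (not part of the original mail), a minimal userspace
sketch of the onlining step described above; "memory32" is a made-up
block number, and the sysfs state file is the standard memory-hotplug
interface:

#include <stdio.h>

/*
 * Hypothetical example: online an already hotplugged (but still
 * offline) memory block into ZONE_MOVABLE. auto_online_blocks must be
 * off so the block has not been onlined into the default zone yet.
 */
int main(void)
{
	FILE *f = fopen("/sys/devices/system/memory/memory32/state", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* "online_movable" puts the block in ZONE_MOVABLE; a plain
	 * "online" would land it in the default (normal) zone. */
	if (fputs("online_movable", f) == EOF)
		perror("fputs");
	fclose(f);
	return 0;
}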
In __fdt_scan_reserved_mem(), the availability of a node is determined
by testing its "status" property.
Move this check into its own function, borrowing logic from the
unflattened version, of_device_is_available().
Another caller will be added in a subsequent patch.
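As a rough sketch (assuming libfdt's fdt_getprop(); the patch itself
has the authoritative version), the borrowed logic could look like:

#include <linux/libfdt.h>
#include <linux/string.h>
#include <linux/types.h>

/* Sketch only: mirror of_device_is_available() for the flattened
 * tree. A missing "status" property means the node is available;
 * otherwise only "okay"/"ok" count as available. */
static bool of_fdt_device_is_available(const void *blob, unsigned long node)
{
	const char *status = fdt_getprop(blob, node, "status", NULL);

	if (!status)
		return true;

	return !strcmp(status, "okay") || !strcmp(status, "ok");
}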
Signed-off-by: Reza Arbab
-
Remove the check which prevents us from hotplugging into an empty node.
This limitation has been questioned before [1], and judging by the
response, there doesn't seem to be a reason we can't remove it. No issues
have been found in light testing.
[1]
http://lkml.kernel.org/r/cagzkibrmksa1yyhbf5h
These changes enable the dynamic creation of movable nodes on power.
On x86, the ACPI SRAT memory affinity structure can mark memory
hotpluggable, allowing the kernel to possibly create movable nodes at
boot.
While power has no analog of this SRAT information, we can still create
a movable memory
Respect the standard dt "status" property when scanning memory nodes in
early_init_dt_scan_memory(), so that if the node is unavailable, no
memory will be added.
The use case at hand is accelerator or device memory, which may be
unusable until post-boot initialization of the memory link. Such a no
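Conceptually, a simplified sketch (reusing the
of_fdt_device_is_available() helper from the earlier patch; the helper
name below is hypothetical and the surrounding scan code is elided):

#include <linux/init.h>
#include <linux/of_fdt.h>

/* Sketch: the guard early_init_dt_scan_memory() would gain. Returning
 * 0 means no memory from a disabled node gets passed on to
 * early_init_dt_add_memory_arch(). */
static int __init memory_node_is_usable(unsigned long node)
{
	if (!of_fdt_device_is_available(initial_boot_params, node))
		return 0;	/* status != "okay": add no memory */
	return 1;
}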
At boot, the movable_node option sets bottom-up memblock allocation.
This reduces the chance that, in the window before movable memory has
been identified, an allocation for the kernel might come from a movable
node. By going bottom-up, early allocations will most likely come from
the same node as
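A sketch of that existing plumbing, simplified from the mm code
(details may differ by kernel version):

#include <linux/init.h>
#include <linux/memblock.h>

/* Sketch: the movable_node boot option flips memblock to bottom-up
 * allocation, so early allocations are drawn from low memory -- most
 * likely the node holding the kernel image. */
static int __init cmdline_parse_movable_node(char *p)
{
	memblock_set_bottom_up(true);
	return 0;
}
early_param("movable_node", cmdline_parse_movable_node);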
Local atomic operations are fast and highly reentrant per-CPU counters,
used for percpu variable updates. Local atomic operations only guarantee
variable modification atomicity wrt the CPU which owns the data, and
they need to be executed in a preemption-safe way.
Here is the design of this patch
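For context (not from this cover letter), the canonical
preemption-safe usage pattern from Documentation/local_ops.txt looks
like:

#include <linux/percpu.h>
#include <asm/local.h>

static DEFINE_PER_CPU(local_t, hits);

static void count_hit(void)
{
	/* get_cpu_var() disables preemption, so the increment runs on
	 * the CPU that owns this copy of the counter */
	local_inc(&get_cpu_var(hits));
	put_cpu_var(hits);
}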
To support disabling and enabling of IRQs with PMIs, a set of new
powerpc_local_irq_pmu_save() and powerpc_local_irq_restore() functions
are added, and powerpc_local_irq_save() is implemented by adding a new
soft_enabled manipulation function, soft_enabled_or_return().
Local_irq_pmu_* macros are provided
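A hypothetical usage sketch of such a pair (the save-side name is
taken from the text above; the restore-side name is assumed -- the
exact interface is defined by the patch):

static void update_pmu_state(void)
{
	unsigned long flags;

	/* mask PMIs in addition to normal interrupts */
	powerpc_local_irq_pmu_save(flags);
	/* ... manipulate PMU registers without a PMI sneaking in ... */
	powerpc_local_irq_pmu_restore(flags);	/* name assumed */
}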
A new Kconfig option, "CONFIG_IRQ_DEBUG_SUPPORT", is added to provide a
warn_on to alert on invalid transitions. Also, the code under
CONFIG_TRACE_IRQFLAGS in arch_local_irq_restore() is moved under the new
Kconfig option.
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/Kconfig | 4
arch/powerpc/kernel/irq.c |
To support masking of the PMI interrupts, a couple of new interrupt
handler macros are added: MASKABLE_EXCEPTION_PSERIES_OOL and
MASKABLE_RELON_EXCEPTION_PSERIES_OOL.
A new bit mask field, "IRQ_DISABLE_MASK_PMU", is introduced to support
the masking of PMIs.
A couple of new irq #defs, "PACA_IRQ_PMI" and "SO
To support the addition of a "bitmask" to the MASKABLE_* macros,
factor out the EXCEPTION_PROLOG_1 macro.
Currently soft_enabled is used as the flag to determine
the interrupt state. This patch extends soft_enabled
to be used as a mask instead of a flag.
Make explicit the interrupt masking supported
by a
Currently we use both EXCEPTION_PROLOG_1 and __EXCEPTION_PROLOG_1
in the MASKABLE_* macros. As a cleanup, this patch makes MASKABLE_*
use only __EXCEPTION_PROLOG_1. There is no logic change.
Reviewed-by: Nicholas Piggin
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/excepti
"paca->soft_enabled" is used as a flag to mask some of interrupts.
Currently supported flags values and their details:
soft_enabledMSR[EE]
0 0 Disabled (PMI and HMI not masked)
1 1 Enabled
"paca->soft_enabled" is initialized to 1 to make the interripts
Add new soft_enabled_* manipulation functions and implement
arch_local_* using the soft_enabled_* wrappers.
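The shape of such wrappers might be (a sketch with assumed details;
r13 holds the paca pointer on ppc64):

#include <linux/stddef.h>
#include <asm/paca.h>

/* Sketch: a single accessor for reading paca->soft_enabled ... */
static inline unsigned long soft_enabled_return(void)
{
	unsigned long flags;

	asm volatile("lbz %0,%1(13)"
		     : "=r" (flags)
		     : "i" (offsetof(struct paca_struct, soft_enabled)));
	return flags;
}

/* ... and arch_local_save_flags() rebuilt on top of it */
static inline unsigned long arch_local_save_flags(void)
{
	return soft_enabled_return();
}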
Reviewed-by: Nicholas Piggin
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/hw_irq.h | 32 ++--
1 file changed, 14 insertions(+), 18 deletions(
Force use of the soft_enabled_set() wrapper to update paca->soft_enabled
wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
to force paca->soft_enabled updates through these accessors.
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/hw_irq.h | 14 ++
arch/p
Move set_soft_enabled() from powerpc/kernel/irq.c to
asm/hw_irq.c, to force updates to paca->soft_enabled to be
done via this access function. Add a "memory" clobber
to hint the compiler, since paca->soft_enabled memory is the
target here.
Renaming it as soft_enabled_set() will make the
namespace work better as p
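A sketch of the renamed accessor with the clobber (the exact asm is
assumed from the description above):

#include <linux/stddef.h>
#include <asm/paca.h>

/* Sketch: store the new value straight into paca->soft_enabled via
 * r13; the "memory" clobber tells the compiler the store touches
 * memory, so accesses cannot be reordered around it. */
static inline void soft_enabled_set(unsigned long enable)
{
	asm volatile("stb %0,%1(13)"
		     : : "r" (enable),
			 "i" (offsetof(struct paca_struct, soft_enabled))
		     : "memory");
}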
Two #defs, IRQ_DISABLE_MASK_NONE and IRQ_DISABLE_MASK_LINUX,
are added to be used when updating paca->soft_enabled.
Replace the hardcoded values used when updating
paca->soft_enabled with the IRQ_DISABLE_MASK_* #defs.
No logic change.
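Illustratively (the values below are assumptions chosen to match the
pre-existing 1 = enabled convention, since the patch claims no logic
change; the patch itself defines the real encodings, and
soft_enabled_set() is the accessor sketched earlier):

/* Assumed values, for illustration only */
#define IRQ_DISABLE_MASK_NONE	1	/* nothing soft-disabled */
#define IRQ_DISABLE_MASK_LINUX	0	/* normal interrupts soft-disabled */

static inline void irq_mask_example(void)
{
	/* instead of the bare constant: soft_enabled_set(0); */
	soft_enabled_set(IRQ_DISABLE_MASK_LINUX);
}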
Reviewed-by: Nicholas Piggin
Signed-off-by: Madhavan Srinivasan
--
Local atomic operations are fast and highly reentrant per-CPU counters,
used for percpu variable updates. Local atomic operations only guarantee
variable modification atomicity wrt the CPU which owns the data, and
they need to be executed in a preemption-safe way.
Here is the design of the patches
From: Anton Blanchard
We supported POWER7 CPUs for bootstrapping little endian, but the
target was always POWER8. Now that POWER7-specific issues are
impacting performance, change the default target to POWER8.
Signed-off-by: Anton Blanchard
---
arch/powerpc/platforms/Kconfig.cputype | 1 +
1 f
From: Anton Blanchard
POWER8 handles unaligned accesses in little endian mode, but commit
0b5e6661ac69 ("powerpc: Don't set HAVE_EFFICIENT_UNALIGNED_ACCESS on
little endian builds") disabled it for all little endian builds.
The issue with unaligned little endian accesses is specific to POWER7,
so update the Kconfig
Hi! Here is my fifth regression report for Linux 4.8. It lists 15
regressions I'm aware of. 5 of them are new (for many of those
there are patches available to fix the regression); 3 mentioned
in last week's report got fixed; 1 is going to be removed.
As always: Are you aware of any other regressi
Hi Nick,
> Hmm. If we execute this loop once, we'll only fetch additional nops.
> Twice, and we make up for them by not fetching unused instructions.
> More than twice and we may start winning.
>
> For large sizes it probably helps, but I'd like to see what sizes
> memset sees.
I noticed this in
On Tue, Aug 02, 2016 at 07:59:31PM +0800, Chenhui Zhao wrote:
> T104x has a deep sleep feature, which can switch off most parts of
> the SoC when it is in deep sleep mode. This way, it becomes more
> energy-efficient.
>
> The DDR controller will also be powered off in deep sleep. Therefore,
> the la
From: Anton Blanchard
__kernel_get_syscall_map and __kernel_clock_getres use cmpli to
check if the passed-in pointer is non-zero. cmpli maps to a 32-bit
compare on binutils, so we ignore the top 32 bits.
A simple test case can be created by passing in a bogus pointer with
the bottom 32 bits clear.
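A hedged sketch of such a test (not the author's actual test case): on
a buggy vDSO the 32-bit compare sees the pointer as NULL, so the call
"succeeds" without storing anything instead of faulting.

#include <stdio.h>
#include <time.h>

int main(void)
{
	/* bogus pointer: non-NULL, but the bottom 32 bits are clear */
	struct timespec *bogus = (struct timespec *)(1UL << 32);

	/* glibc typically routes this through the vDSO
	 * __kernel_clock_getres on ppc64 */
	int rc = clock_getres(CLOCK_REALTIME, bogus);

	printf("rc = %d (a correct 64-bit compare should fault or fail "
	       "rather than succeed)\n", rc);
	return 0;
}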