Trim the pointless temporary variable.
Signed-off-by: Reza Arbab
---
arch/powerpc/platforms/powernv/pci-ioda.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index b78b5e81f941
code")
Signed-off-by: Reza Arbab
Cc: Christoph Hellwig
---
arch/powerpc/platforms/powernv/pci-ioda.c | 16 ++--
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 8849218187d7..70
ly
falling back to 32 bits.
Fixes: 2d6ad41b2c21 ("powerpc/powernv: use the generic iommu bypass code")
Signed-off-by: Reza Arbab
Cc: Christoph Hellwig
---
arch/powerpc/kernel/dma-iommu.c | 19 +++
arch/powerpc/platforms/powernv/pci-ioda.c | 8 ++--
Change pnv_pci_ioda_iommu_bypass_supported() to have no side effects, by
separating the part of the function that determines if bypass is
supported from the part that actually attempts to configure it.
Move the latter to a controller-specific dma_set_mask() callback.
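Roughly, the intended shape of the split (illustrative sketch; only
pnv_pci_ioda_iommu_bypass_supported() is a real name here, the callback
name, the condition, and the helper are placeholders):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* placeholder for the actual bypass window programming */
static void pnv_pci_ioda_configure_bypass(struct pci_dev *pdev)
{
}

static bool pnv_pci_ioda_iommu_bypass_supported(struct pci_dev *pdev, u64 mask)
{
    /* pure query: no configuration is attempted here */
    return mask >= DMA_BIT_MASK(64);    /* simplified condition */
}

static void pnv_pci_ioda_dma_set_mask(struct pci_dev *pdev, u64 dma_mask)
{
    /* the side effects move here: actually set up bypass if possible */
    if (pnv_pci_ioda_iommu_bypass_supported(pdev, dma_mask))
        pnv_pci_ioda_configure_bypass(pdev);
}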
Signed-off-by: Reza Arbab
Collapse several open coded instances of pnv_ioda_get_pe().
Signed-off-by: Reza Arbab
---
arch/powerpc/platforms/powernv/npu-dma.c | 22 +-
arch/powerpc/platforms/powernv/pci-ioda.c | 10 +++---
2 files changed, 8 insertions(+), 24 deletions(-)
diff --git a/arch
Revert commit b4d37a7b6934 ("powerpc/powernv: Remove unused
pnv_npu_try_dma_set_bypass() function") so that this function can be
reintegrated.
Fixes: 2d6ad41b2c21 ("powerpc/powernv: use the generic iommu bypass code")
Signed-off-by: Reza Arbab
Cc: Christoph Hellwig
---
ar
Write this loop more compactly to improve readability.
Signed-off-by: Reza Arbab
---
arch/powerpc/platforms/powernv/npu-dma.c | 9 ++---
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index
Move this code to its own function for reuse. As a side benefit,
rearrange the comments and spread things out for readability.
Signed-off-by: Reza Arbab
---
arch/powerpc/platforms/powernv/pci-ioda.c | 37 +--
1 file changed, 25 insertions(+), 12 deletions(-)
diff
This little calculation will be needed in other places. Move it to a
convenience function.
Signed-off-by: Reza Arbab
---
arch/powerpc/platforms/powernv/pci-ioda.c | 8 +++-
arch/powerpc/platforms/powernv/pci.h | 8
2 files changed, 11 insertions(+), 5 deletions(-)
diff --git
Bring back the pci_controller-based hook in dma_set_mask(), as it will
have a user again.
This reverts commit 662acad4067a ("powerpc/pci: remove the dma_set_mask
pci_controller ops methods"). The callback signature has been adjusted
to return void, to fit its caller.
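A minimal sketch of what that looks like (assumptions: the ops field is
named dma_set_mask as in the reverted commit, and the caller is the
generic arch_dma_set_mask() hook):

#include <linux/pci.h>
#include <asm/pci-bridge.h>

void arch_dma_set_mask(struct device *dev, u64 dma_mask)
{
    if (dev_is_pci(dev)) {
        struct pci_dev *pdev = to_pci_dev(dev);
        struct pci_controller *phb = pci_bus_to_host(pdev->bus);

        /* give the controller a chance to act on the new mask */
        if (phb->controller_ops.dma_set_mask)
            phb->controller_ops.dma_set_mask(pdev, dma_mask);
    }
}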
Signed-off-by:
actor pnv_pci_ioda_iommu_bypass_supported(). It seems
wrong for a boolean *_supported() function to have side effects. These
patches reintroduce a pci_controller-based dma_set_mask() hook. If that's
undesirable, these last three patches can be dropped.
Reza Arbab (11):
Revert "powerpc/powernv: Remove unused pnv_npu_try
To enable simpler calling code, change this function to find the value
of bypass instead of taking it as an argument.
Signed-off-by: Reza Arbab
---
arch/powerpc/platforms/powernv/npu-dma.c | 12 +---
arch/powerpc/platforms/powernv/pci.h | 2 +-
2 files changed, 10 insertions(+), 4
pnv_npu_try_dma_set_bypass() is to then propagate the same bypass
configuration to all the NPU devices associated with that GPU.
--
Reza Arbab
On Wed, Oct 30, 2019 at 06:55:18PM +0100, Christoph Hellwig wrote:
On Wed, Oct 30, 2019 at 12:00:00PM -0500, Reza Arbab wrote:
Change pnv_pci_ioda_iommu_bypass_supported() to have no side effects, by
separating the part of the function that determines if bypass is
supported from the part that
On Wed, Oct 30, 2019 at 07:13:59PM +0100, Christoph Hellwig wrote:
On Wed, Oct 30, 2019 at 01:08:51PM -0500, Reza Arbab wrote:
On Wed, Oct 30, 2019 at 06:53:41PM +0100, Christoph Hellwig wrote:
How do you even use this code? Nothing in the kernel even calls
dma_set_mask for NPU devices, as we
Popple
[ar...@linux.ibm.com: Rebase, add commit log]
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/opal-api.h| 4 ++-
arch/powerpc/include/asm/opal.h| 3 ++
arch/powerpc/include/asm/powernv.h | 12
arch/powerpc/platforms/powernv/npu-dma.c
__this_cpu_inc_return(mce_queue_count) - 1;
/* If queue is full, just return for now. */
if (index >= MAX_MC_EVT) {
--
2.13.6
--
Reza Arbab
Tested-by: Reza Arbab
--
Reza Arbab
or it. I.e.,
if ((rc & NOTIFY_STOP_MASK) && (regs->msr & MSR_RI)) {
evt->disposition = MCE_DISPOSITION_RECOVERED;
--
Reza Arbab
n't see how memcpy_mcsafe() would be causing it. I tried changing it
to save/restore r13 where it already does r14-r22, but that didn't make
a difference. Any ideas?
--
Reza Arbab
On Tue, Jul 02, 2019 at 10:49:23AM +0530, Santosh Sivaraj wrote:
+static BLOCKING_NOTIFIER_HEAD(mce_notifier_list);
Mahesh suggested using an atomic notifier chain instead of blocking,
since we are in an interrupt.
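Something along these lines, as a sketch (names are illustrative; the
point is just the atomic chain API, which is safe to call in interrupt
context):

#include <linux/notifier.h>

static ATOMIC_NOTIFIER_HEAD(mce_notifier_list);

static int example_mce_cb(struct notifier_block *nb, unsigned long val,
                          void *data)
{
    /* must not sleep: may run from MCE/interrupt context */
    return NOTIFY_OK;
}

static struct notifier_block example_mce_nb = {
    .notifier_call = example_mce_cb,
};

/* registration and invocation, roughly: */
/*   atomic_notifier_chain_register(&mce_notifier_list, &example_mce_nb); */
/*   atomic_notifier_call_chain(&mce_notifier_list, 0, evt);              */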
--
Reza Arbab
We
have a couple of other potential users of the notifier from external
modules (so their callbacks would require virtual mode).
--
Reza Arbab
back to the drawing board for the others.
--
Reza Arbab
On Sat, Jul 06, 2019 at 07:56:39PM +1000, Nicholas Piggin wrote:
Santosh Sivaraj's on July 6, 2019 7:26 am:
From: Reza Arbab
Testing my memcpy_mcsafe() work in progress with an injected UE, I get
an error like this immediately after the function returns:
BUG: Unable to handle kernel
so it might be enough to have a
notifier in the irq work processing.
We can pick up this thread later, but if I remember correctly the
sticking point we ran into was that we never got that far. Instead of
returning from the MCE, we went down the fatal codepath.
--
Reza Arbab
")
98fe3633c5a4 ("x86/mm/hotplug: Fix BUG_ON() after hot-remove by not freeing PUD")
Does their reasoning apply to powerpc as well?
--
Reza Arbab
To prevent any issues if these particulars ever change, add _PAGE_SAO to
the mask.
Suggested-by: Charles Johns
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.
It is a little confusing to define
_PAGE_CACHE_CTL this way:
#define _PAGE_CACHE_CTL (_PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT)
I like Alexey's idea; maybe just use a literal?
#define _PAGE_CACHE_CTL 0x30
--
Reza Arbab
beats it to the punch.
There may be a long-term way to fix this at a larger scale, but for now
resolve the immediate problem by gating our call to
test_and_set_bit_lock() with one to test_bit(), which is obviously
implemented without using a store.
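In other words, something like this (helper name hypothetical; the real
patch gates the existing call site rather than adding a wrapper):

#include <linux/bitops.h>
#include <linux/types.h>

static bool example_try_set_bit_gated(unsigned int nr,
                                      volatile unsigned long *addr)
{
    /* test_bit() is a plain load, so no store is issued on this path */
    if (test_bit(nr, addr))
        return false;

    /* only now attempt the locked read-modify-write */
    return !test_and_set_bit_lock(nr, addr);
}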
Signed-off-by: Reza Arbab
---
arch/powerpc
twice. As above, can fix during commit, so no need for
a new patch.
+ assert(dt_find_by_name_before_addr(root, "node0_1") == addr2);
+ assert(dt_find_by_name_before_addr(root, "node0") == NULL);
+ assert(dt_find_by_name_before_addr(root, "node0_") == NULL);
+ dt_free(root);
+
return 0;
}
--
Reza Arbab
on
and testcase for the same in core/test/run-device.c
Series applied to skiboot master with the fixup we discussed.
--
Reza Arbab
due to hotplug/unplug
+         * limitations.
+         */
+        compatible = of_get_flat_dt_prop(node, "compatible", NULL);
+        if (compatible &&
+            !strcmp(compatible, "ibm,coherent-device-memory")) {
+                *block_size = SZ_256M;
+                return 1;
+        }
}
/* continue looking for other memory device types */
return 0;
--
Reza Arbab
On Sat, Jul 29, 2023 at 08:58:57PM +0530, Aneesh Kumar K V wrote:
Thanks for correcting the right device tree node and testing the
changes. Can I add
Co-authored-by: Reza Arbab
Sure, that's fine.
Signed-off-by: Reza Arbab
--
Reza Arbab
t_new_addr(root, "node", 0x1);
addr2 = dt_new_addr(root, "node", 0x2);
assert(dt_find_by_name_substr(root, "node") == ???);
^^^
--
Reza Arbab
s_p10[i]);
+ if (!target)
+ continue;
+ /* Remove the device node */
+ dt_free(target);
+ }
+ }
--
Reza Arbab
and make sure we can map them all
correctly using the computed memory block size value.
Reviewed-by: Reza Arbab
--
Reza Arbab
determined at boot is 1GB, but we
know that 15.75GB of device memory will be hotplugged during runtime.
Reviewed-by: Reza Arbab
--
Reza Arbab
ame fit better in previous versions of the patch, but
since you're specifically looking for '@' now, maybe call it something
like dt_find_by_name_before_addr?
--
Reza Arbab
rget = dt_find_by_name_substr(dev, otl);
+ if (target)
+ dt_free(target);
+ break;
As far as I know skiboot follows the kernel coding style. Would you mind
fixing up the minor style nits checkpatch.pl reports for this patch?
--
Reza Arbab
DD2.3 as a discrete case for the first time here, I'm
carrying the quirks of DD2.2 forward to keep all behavior outside of
this DAWR change the same. This leaves the assessment and potential
removal of those quirks on DD2.3 for later.
Signed-off-by: Reza Arbab
---
Documentation/powerpc/dawr-
does not free, and it wasn't quite clear if
their motivation also applies to us. Probably not, but I thought it was
worth mentioning again.
--
Reza Arbab
termios struct. It's the issue described here:
https://groups.google.com/forum/#!topic/golang-nuts/K5NoG8slez0
Things work if you replace syscall.TCGETS with 0x402c7413 and
syscall.TCSETS with 0x802c7414, the correct values on ppc64le.
--
Reza Arbab
On Wed, Mar 15, 2017 at 01:11:19PM -0500, Reza Arbab wrote:
https://groups.google.com/forum/#!topic/golang-nuts/K5NoG8slez0
Oops.
https://groups.google.com/d/msg/golang-nuts/K5NoG8slez0/mixUse17iaMJ
--
Reza Arbab
.kernel.org/r/1479253501-26261-1-git-send-email-bsinghar...@gmail.com
(see patch 3/3)
--
Reza Arbab
On Tue, May 23, 2017 at 03:05:08PM -0500, Michael Bringmann wrote:
On 05/23/2017 10:52 AM, Reza Arbab wrote:
On Tue, May 23, 2017 at 10:15:44AM -0500, Michael Bringmann wrote:
+static void setup_nodes(void)
+{
+int i, l = 32 /* MAX_NUMNODES */;
+
+for (i = 0; i < l; i++) {
+
On Tue, May 23, 2017 at 05:44:23PM -0500, Michael Bringmann wrote:
On 05/23/2017 04:49 PM, Reza Arbab wrote:
On Tue, May 23, 2017 at 03:05:08PM -0500, Michael Bringmann wrote:
On 05/23/2017 10:52 AM, Reza Arbab wrote:
On Tue, May 23, 2017 at 10:15:44AM -0500, Michael Bringmann wrote:
+static
-email-bsinghar...@gmail.com
--
Reza Arbab
On Fri, May 26, 2017 at 01:46:58PM +1000, Michael Ellerman wrote:
Reza Arbab writes:
On Thu, May 25, 2017 at 04:19:53PM +1000, Michael Ellerman wrote:
The commit message for 3af229f2071f says:
In practice, we never see a system with 256 NUMA nodes, and in fact, we
do not support node
if (nid < 0)
+ continue;
+
+ node_set(nid, node_possible_map);
+ }
+
for_each_online_node(nid) {
unsigned long start_pfn, end_pfn;
--
Reza Arbab
Change {create,remove}_section_mapping() to be wrappers around functions
prefixed with "hash__".
This is preparation for the addition of their "radix__" variants. No
functional change.
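The wrapper shape, roughly (sketch; the hash__ declarations live in the
header touched by the diffstat below, and the radix__ variants are added
later in the series):

int create_section_mapping(unsigned long start, unsigned long end)
{
    return hash__create_section_mapping(start, end);
}

int remove_section_mapping(unsigned long start, unsigned long end)
{
    return hash__remove_section_mapping(start, end);
}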
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/book3s/64/hash.h | 5 ++
WARNs, but otherwise the code to remove linear
mappings is already sufficient for vmemmap.
Signed-off-by: Reza Arbab
---
arch/powerpc/mm/pgtable-radix.c | 23 ++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm
This was defaulting to 4K, regardless of PAGE_SIZE.
Signed-off-by: Reza Arbab
---
arch/powerpc/mm/pgtable-radix.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 623a0dc..54bd70e 100644
--- a/arch/powerpc/mm/pgtable
Tear down and free the four-level page tables of the linear mapping
during memory hotremove.
We borrow the basic structure of remove_pagetable() and friends from the
identically-named x86 functions.
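The structural shape being borrowed, very roughly (sketch only; the
per-level descent helpers are elided, and the locking/flushing details
follow the actual patch, not this outline):

#include <linux/mm.h>
#include <asm/tlbflush.h>

/* per-level descent helper, elided in this sketch */
static void remove_pud_table(pud_t *pud_base, unsigned long addr,
                             unsigned long end);

static void remove_pagetable(unsigned long start, unsigned long end)
{
    unsigned long addr, next;
    pgd_t *pgd;

    spin_lock(&init_mm.page_table_lock);

    for (addr = start; addr < end; addr = next) {
        next = pgd_addr_end(addr, end);
        pgd = pgd_offset_k(addr);

        if (!pgd_present(*pgd))
            continue;

        remove_pud_table(pud_offset(pgd, addr), addr, next);
    }

    spin_unlock(&init_mm.page_table_lock);
    flush_tlb_kernel_range(start, end);
}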
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/book3s/64/radix.h | 1 +
arch/powerpc
Add the linear page mapping function for radix, used by memory hotplug.
This is similar to vmemmap_populate().
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/book3s/64/radix.h | 4
arch/powerpc/mm/pgtable-book3s64.c | 2 +-
arch/powerpc/mm/pgtable-radix.c| 19
sh
and Michael pointed out, they are tied to CONFIG_SPARSEMEM_VMEMMAP and only
did what I needed by luck anyway.
v1:
*
https://lkml.kernel.org/r/1466699962-22412-1-git-send-email-ar...@linux.vnet.ibm.com
Reza Arbab (5):
powerpc/mm: set the radix linear page mapping size
powerpc/mm: refactor
On Sat, Dec 17, 2016 at 01:38:40AM +1100, Balbir Singh wrote:
Do we care about alt maps yet?
Good question. I'll try to see if/how altmaps might need special
consideration here.
--
Reza Arbab
On Mon, Dec 19, 2016 at 02:30:28PM +0530, Aneesh Kumar K.V wrote:
Reza Arbab writes:
Change {create,remove}_section_mapping() to be wrappers around
functions prefixed with "hash__".
This is preparation for the addition of their "radix__" variants. No
functional change.
On Mon, Dec 19, 2016 at 02:34:13PM +0530, Aneesh Kumar K.V wrote:
Reza Arbab writes:
Add the linear page mapping function for radix, used by memory
hotplug. This is similar to vmemmap_populate().
Ok with this patch your first patch becomes useful. Can you merge that
with this and rename
On Mon, Dec 19, 2016 at 03:18:07PM +0530, Aneesh Kumar K.V wrote:
Reza Arbab writes:
+static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+unsigned long end)
+{
+ unsigned long next;
+ pte_t *pte;
+
+ pte = pte_start + pte_index(addr
gtable() and reuse?
Yes, that's my plan for v4.
--
Reza Arbab
hash__ and
radix__ variants. Leave the radix versions stubbed for now.
Reviewed-by: Aneesh Kumar K.V
Acked-by: Balbir Singh
Signed-off-by: Reza Arbab
---
It was suggested that this fix be separated from the rest of the
set which implements the radix page mapping/unmapping.
arch/powerpc/include
Wire up memory hotplug page mapping for radix. Share the mapping
function already used by radix_init_pgtable().
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/book3s/64/radix.h | 4
arch/powerpc/mm/pgtable-book3s64.c | 2 +-
arch/powerpc/mm/pgtable-radix.c| 7
must be offline to be removed, thus not in use. So there
shouldn't be the sort of concurrent page walking activity here that
might prompt us to use RCU.
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/book3s/64/radix.h | 1 +
arch/powerpc/mm/pgtable-book3s64.c | 2 +-
arch/po
did what I needed by luck anyway.
v1:
*
https://lkml.kernel.org/r/1466699962-22412-1-git-send-email-ar...@linux.vnet.ibm.com
Reza Arbab (4):
powerpc/mm: refactor radix physical page mapping
powerpc/mm: add radix__create_section_mapping()
powerpc/mm: add radix__remove_section_mapping()
powerpc
ends of the range.
Signed-off-by: Reza Arbab
---
arch/powerpc/mm/pgtable-radix.c | 69 ++---
1 file changed, 31 insertions(+), 38 deletions(-)
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 623a0dc..5cee6d1 100644
--- a
WARNs, but otherwise the code to remove physical
mappings is already sufficient for vmemmap.
Signed-off-by: Reza Arbab
---
arch/powerpc/mm/pgtable-radix.c | 29 -
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch
e a way to dump the range and the size with which we mapped that
range?
Sure. It's a little more difficult than before, because the mapping size
is now reselected in each iteration of the loop, but a similar print can
be done.
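For example, something in this spirit (sketch; pick_mapping_size() is a
hypothetical stand-in for the per-iteration size selection the real loop
does):

#include <linux/printk.h>
#include <linux/sizes.h>

/* hypothetical stand-in for the real size-selection logic */
static unsigned long pick_mapping_size(unsigned long addr, unsigned long end)
{
    return SZ_64K;    /* the real code chooses among e.g. 1G/2M/64K */
}

static void map_range_with_print(unsigned long start, unsigned long end)
{
    unsigned long addr, mapping_size;

    for (addr = start; addr < end; addr += mapping_size) {
        mapping_size = pick_mapping_size(addr, end);
        pr_info("radix: mapped 0x%lx..0x%lx with page size 0x%lx\n",
                addr, addr + mapping_size, mapping_size);
    }
}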
--
Reza Arbab
On Wed, Jan 04, 2017 at 10:37:58AM +0530, Aneesh Kumar K.V wrote:
Reza Arbab writes:
+static void remove_pagetable(unsigned long start, unsigned long end)
+{
+ unsigned long addr, next;
+ pud_t *pud_base;
+ pgd_t *pgd;
+
+ spin_lock(&init_mm.page_table_
v1:
*
https://lkml.kernel.org/r/1466699962-22412-1-git-send-email-ar...@linux.vnet.ibm.com
Reza Arbab (4):
powerpc/mm: refactor radix physical page mapping
powerpc/mm: add radix__create_section_mapping()
powerpc/mm: add radix__remove_section_mapping()
powerpc/mm: unstub radix__vmemmap_remove_ma
ends of the range.
Signed-off-by: Reza Arbab
---
arch/powerpc/mm/pgtable-radix.c | 88 +++--
1 file changed, 50 insertions(+), 38 deletions(-)
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 623a0dc..2ce1354 100644
--- a
Wire up memory hotplug page mapping for radix. Share the mapping
function already used by radix_init_pgtable().
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/book3s/64/radix.h | 4
arch/powerpc/mm/pgtable-book3s64.c | 2 +-
arch/powerpc/mm/pgtable-radix.c| 7
WARNs, but otherwise the code to remove physical
mappings is already sufficient for vmemmap.
Signed-off-by: Reza Arbab
---
arch/powerpc/mm/pgtable-radix.c | 29 -
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch
Signed-off-by: Reza Arbab
---
arch/powerpc/include/asm/book3s/64/radix.h | 1 +
arch/powerpc/mm/pgtable-book3s64.c | 2 +-
arch/powerpc/mm/pgtable-radix.c| 133 +
3 files changed, 135 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/includ
rnel.org/r/1483475991-16999-1-git-send-email-ar...@linux.vnet.ibm.com
(aka http://patchwork.ozlabs.org/patch/710629/)
--
Reza Arbab
Thanks for your review!
On Tue, Jan 17, 2017 at 12:16:35PM +0530, Balbir Singh wrote:
On Mon, Jan 16, 2017 at 01:07:43PM -0600, Reza Arbab wrote:
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -107,54 +107,66 @@ int radix__map_kernel_page(unsigned long ea
01:07:45PM -0600, Reza Arbab wrote:
#ifdef CONFIG_MEMORY_HOTPLUG
+static void free_pte_table(pte_t *pte_start, pmd_t *pmd)
+{
+ pte_t *pte;
+ int i;
+
+ for (i = 0; i < PTRS_PER_PTE; i++) {
+ pte = pte_start + i;
+ if (!
On Tue, Jan 17, 2017 at 12:55:13PM +0530, Balbir Singh wrote:
On Mon, Jan 16, 2017 at 01:07:46PM -0600, Reza Arbab wrote:
Use remove_pagetable() and friends for radix vmemmap removal.
We do not require the special-case handling of vmemmap done in the x86
versions of these functions. This is
When setting a 2M pte, radix__map_kernel_page() is using the address
ptep = (pte_t *)pudp;
Fix this conversion to use pmdp instead. Use pmdp_ptep() to do this
instead of casting the pointer.
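Illustrated (sketch only; the surrounding function is elided and the
helper name here is made up, but pmdp_ptep() is the conversion the patch
uses):

#include <asm/pgtable.h>

static pte_t *pte_for_2m_entry(pmd_t *pmdp)
{
    /* previously the 2M case did: ptep = (pte_t *)pudp;  -- wrong level */
    return pmdp_ptep(pmdp);    /* derive the pte pointer from the pmd */
}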
Signed-off-by: Reza Arbab
---
arch/powerpc/mm/pgtable-radix.c | 4 ++--
1 file changed, 2
rence, how are you ending up with
-Werror=maybe-uninitialized? On powerpc/next, with pseries_le_defconfig,
I get -Wno-maybe-uninitialized.
--
Reza Arbab
On Fri, Apr 06, 2018 at 03:24:23PM +1000, Balbir Singh wrote:
This patch adds support for flushing potentially dirty
cache lines when memory is hot-plugged/hot-un-plugged.
Acked-by: Reza Arbab
--
Reza Arbab
lly adding movable
nodes after boot.
1. http://events.linuxfoundation.org/sites/events/files/lcjp13_chen.pdf
2. commit 79442ed189ac ("mm/memblock.c: introduce bottom-up allocation
mode")
--
Reza Arbab
On Mon, Sep 19, 2016 at 09:53:49PM +1000, Balbir Singh wrote:
I presume you've tested with CONFIG_NODES_SHIFT of 8 (255 nodes?)
Oh yes, definitely.
The large number of possible nodes does not come into play here.
--
Reza Arbab
bool can_online_high_movable(struct zone *zone)
{
return node_state(zone_to_nid(zone), N_NORMAL_MEMORY);
}
#endif /* CONFIG_MOVABLE_NODE */
To be more clear, I can change the commit log to say "Onlining all of a
node's memory into ZONE_MOVABLE requires CONFIG_MOVABLE_NODE".
--
Reza Arbab
e from
normal to movable only if movable node is set. Also you may want to
mention that we still don't support the auto-online to movable.
Sure, no problem. I'll use a more verbose commit message in v3.
--
Reza Arbab
as the kernel image, which is necessarily in a nonmovable
node.
Then, once any known hotplug memory has been marked, allocation can be
reset back to top-down. On x86, this is done in numa_init(). This patch
does the same on power, in numa initmem_init().
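As a sketch of the idea (not the verbatim patch; the surrounding
initmem_init() logic is elided):

#include <linux/init.h>
#include <linux/memblock.h>

void __init initmem_init(void)
{
    /* ...while bottom-up is in effect, mark any hotpluggable ranges... */

    /* done placing hotplug regions; restore default top-down allocation */
    memblock_set_bottom_up(false);

    /* ...rest of NUMA/memory init... */
}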
Signed-off-by: Reza Arbab
---
arch/powerpc
is not plugged in, or switched off).
Once such memory is made operational, it can then be hotplugged.
Signed-off-by: Reza Arbab
---
drivers/of/fdt.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 9241c6e..59b772a 100644
--- a/drivers/o
n.
v1:
*
http://lkml.kernel.org/r/1470680843-28702-1-git-send-email-ar...@linux.vnet.ibm.com
Reza Arbab (5):
drivers/of: introduce of_fdt_is_available()
drivers/of: do not add memory for unavailable nodes
powerpc/mm: allow memory hotplug into a memoryless node
powerpc/mm: restore top-down allo
rnel.org/r/cagzkibrmksa1yyhbf5hwgxubcjse5smksmy4tpanerme2ug...@mail.gmail.com
http://lkml.kernel.org/r/20160511215051.gf22...@arbab-laptop.austin.ibm.com
Signed-off-by: Reza Arbab
Acked-by: Balbir Singh
Cc: Nathan Fontenot
Cc: Bharata B Rao
---
arch/powerpc/mm/numa.c | 13 +
1 file changed, 1
In __fdt_scan_reserved_mem(), the availability of a node is determined
by testing its "status" property.
Move this check into its own function, borrowing logic from the
unflattened version, of_device_is_available().
Another caller will be added in a subsequent patch.
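The factored-out check would look roughly like this (sketch; the exact
helper name and signature may differ from the patch):

#include <linux/of_fdt.h>
#include <linux/string.h>

static bool of_fdt_is_available(unsigned long node)
{
    const char *status = of_get_flat_dt_prop(node, "status", NULL);

    /* no "status" property means the node is available */
    if (!status)
        return true;

    return !strcmp(status, "okay") || !strcmp(status, "ok");
}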
Signed-off-by:
, can_online_high_movable()
will only allow us to do the onlining if CONFIG_MOVABLE_NODE is set.
Enable the use of this config option on PPC64 platforms.
Signed-off-by: Reza Arbab
---
Documentation/kernel-parameters.txt | 2 +-
mm/Kconfig | 2 +-
2 files changed, 2 insertions
on to movable node.
Sure, we can do it earlier. The only consideration is that any potential
calls to memblock_mark_hotplug() happen before we reset to top-down.
Since we don't do that at all on power, the call can go anywhere.
--
Reza Arbab
ion. That is
the missing call being added in the patch.
--
Reza Arbab
know until some PCI device
gets turned into CAPI mode and starts claiming LPC memory...
Yes, this is what is planned for, if I'm understanding you correctly.
In the dt, the PCI device node has a phandle pointing to the memory
node. The memory node describes the window into which we can hotplug at
runtime.
--
Reza Arbab
On Tue, Oct 04, 2016 at 11:48:30AM +1100, Balbir Singh wrote:
On 27/09/16 10:14, Reza Arbab wrote:
Right. To be clear, the background info I put in the commit log
refers to x86, where the SRAT can describe movable nodes which exist
at boot. They're trying to avoid allocations from those
bottom-up memblock allocation.
Since #ifdef CONFIG_MOVABLE_NODE will no longer be enough to restrict
this option to x86, move it to an arch-specific compilation unit
instead.
Signed-off-by: Reza Arbab
---
arch/x86/mm/numa.c | 35 ++-
mm/memory_hotp
means that a movable node
can be created by hotplugging all of its memory into ZONE_MOVABLE.
Fix the Kconfig definition of CONFIG_MOVABLE_NODE, which currently
recognizes (1), but not (2).
Signed-off-by: Reza Arbab
---
mm/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
In __fdt_scan_reserved_mem(), the availability of a node is determined
by testing its "status" property.
Move this check into its own function, borrowing logic from the
unflattened version, of_device_is_available().
Another caller will be added in a subsequent patch.
Signed-off-by:
node. This set
no longer has any bearing on whether the pgdat is created at boot or
at the time of memory addition.
v1:
*
http://lkml.kernel.org/r/1470680843-28702-1-git-send-email-ar...@linux.vnet.ibm.com
Reza Arbab (5):
drivers/of: introduce of_fdt_device_is_available()
drivers/of
rnel.org/r/cagzkibrmksa1yyhbf5hwgxubcjse5smksmy4tpanerme2ug...@mail.gmail.com
http://lkml.kernel.org/r/20160511215051.gf22...@arbab-laptop.austin.ibm.com
Signed-off-by: Reza Arbab
Reviewed-by: Aneesh Kumar K.V
Acked-by: Balbir Singh
Cc: Nathan Fontenot
Cc: Bharata B Rao
---
arch/powerpc/mm/numa.c | 13 +-
is not plugged in, or switched off).
Once such memory is made operational, it can then be hotplugged.
Signed-off-by: Reza Arbab
---
drivers/of/fdt.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index b138efb..08e5d94 100644
--- a/drivers/o