On 31/01/2017 5:11 PM, Michael Ellerman wrote:
> Rui Teng <rui.t...@linux.vnet.ibm.com> writes:
>> The offset of the hugepage block will not be 16G if more than one
>> page is expected. Calculate the total size instead of using the
>> hardcoded value.
>
> I assume you found this by code inspection and not by triggering an
> actual bug?
Yes, I found this problem only by code inspection. We were looking for
ways to enable 16G huge pages other than changing the device tree, for
example by providing a new interface to set the size and page count
parameters. With such an interface, expected_pages could be more than
one, so I think the hardcoded offset may cause a problem here.
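
To make the concern concrete, here is a minimal user-space sketch of
the arithmetic (the DRAM size and addresses below are made-up values
for illustration, not taken from any real machine or from the kernel
code) showing how the hardcoded check and the computed check can
disagree once expected_pages is more than one:

#include <stdio.h>

#define GB (1024UL * 1024 * 1024)

int main(void)
{
	unsigned long dram_end = 64 * GB;	/* assumed end of DRAM */
	unsigned long phys_addr = 40 * GB;	/* start of hugepage block */
	unsigned long block_size = 16 * GB;	/* 16G huge page size */
	int expected_pages = 2;			/* more than one page */

	/* Old check: only guards the first 16G, so it passes here
	 * (40G + 16G = 56G <= 64G). */
	printf("old check passes: %d\n",
	       phys_addr + (16 * GB) <= dram_end);

	/* New check: the full reservation is 32G and overruns DRAM
	 * (40G + 32G = 72G > 64G), so it correctly fails. */
	printf("new check passes: %d\n",
	       phys_addr + block_size * expected_pages <= dram_end);

	return 0;
}

In a case like this the old check would still let the code call
memblock_reserve() for the full block_size * expected_pages range,
reserving memory past the end of DRAM.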
> cheers
>> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
>> index 8033493..b829f8e 100644
>> --- a/arch/powerpc/mm/hash_utils_64.c
>> +++ b/arch/powerpc/mm/hash_utils_64.c
>> @@ -506,7 +506,7 @@ static int __init htab_dt_scan_hugepage_blocks(unsigned long node,
>>  	printk(KERN_INFO "Huge page(16GB) memory: "
>>  			"addr = 0x%lX size = 0x%lX pages = %d\n",
>>  			phys_addr, block_size, expected_pages);
>> -	if (phys_addr + (16 * GB) <= memblock_end_of_DRAM()) {
>> +	if (phys_addr + block_size * expected_pages <= memblock_end_of_DRAM()) {
>>  		memblock_reserve(phys_addr, block_size * expected_pages);
>>  		add_gpage(phys_addr, block_size, expected_pages);
>>  	}
>> --
>> 2.9.0