The current implementation calculates usemap_size() in two steps:
    * calculate the number of bytes needed to cover these bits
    * calculate the number of "unsigned long" needed to cover these bytes

It would be clearer to:
    * calculate the number of "unsigned long" needed to cover these bits
    * multiply that by sizeof(unsigned long)

This patch refines usemap_size() a little to make it easier to
understand.
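
To illustrate, below is a minimal userspace sketch (not part of the
patch) that checks the two calculations agree. The roundup() and
BITS_TO_LONGS() definitions are stand-ins mirroring the kernel macros,
and EXAMPLE_BITS is an arbitrary placeholder for SECTION_BLOCKFLAGS_BITS:

#include <assert.h>
#include <stdio.h>

#define EXAMPLE_BITS		100	/* placeholder for SECTION_BLOCKFLAGS_BITS */
#define BITS_PER_LONG		(8 * sizeof(unsigned long))
#define roundup(x, y)		((((x) + ((y) - 1)) / (y)) * (y))
#define BITS_TO_LONGS(nr)	(((nr) + BITS_PER_LONG - 1) / BITS_PER_LONG)

int main(void)
{
	/* old: bytes needed to cover the bits, rounded up to unsigned long */
	unsigned long old_size = roundup(roundup(EXAMPLE_BITS, 8) / 8,
					 sizeof(unsigned long));
	/* new: "unsigned long"s needed to cover the bits, times their size */
	unsigned long new_size = BITS_TO_LONGS(EXAMPLE_BITS) *
					 sizeof(unsigned long);

	assert(old_size == new_size);
	printf("usemap size for %d bits: %lu bytes\n", EXAMPLE_BITS, new_size);
	return 0;
}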

Signed-off-by: Wei Yang <richard.weiy...@gmail.com>
---
 mm/sparse.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index a0792526adfa..faa36ef9f9bd 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -249,10 +249,7 @@ static int __meminit sparse_init_one_section(struct mem_section *ms,
 
 unsigned long usemap_size(void)
 {
-       unsigned long size_bytes;
-       size_bytes = roundup(SECTION_BLOCKFLAGS_BITS, 8) / 8;
-       size_bytes = roundup(size_bytes, sizeof(unsigned long));
-       return size_bytes;
+       return BITS_TO_LONGS(SECTION_BLOCKFLAGS_BITS) * sizeof(unsigned long);
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-- 
2.11.0
