> > Conceptually it would be cleaner, if expensive, to calculate the real
> > memblock reserves if HASH_EARLY and ditch the dma_reserve, memory_reserve
> > and nr_kernel_pages entirely.
>
> Why is it expensive? memblock tracks the totals for all memory and
> reserved memory AFAIK, so it should just be a case of subtracting one
> from the other?
Are you suggesting that we use something like memblock_phys_mem_size(),
but one which returns memblock.reserved.total_size? Maybe a new function
like memblock_reserved_mem_size()?

> > Unfortunately, aside from the calculation, there is a potential cost
> > due to a smaller hash table that affects everyone, not just ppc64.
>
> Yeah OK. We could make it an arch hook, or controlled by a CONFIG.

If it's based on memblock.reserved.total_size, then should it be arch
specific?

> > However, if the hash table is meant to be sized on the number of
> > available pages then it really should be based on that and not just a
> > made-up number.
>
> Yeah that seems to make sense.
>
> The one complication I think is that we may have memory that's marked
> reserved in memblock, but is later freed to the page allocator (eg.
> initrd).

Yes, this is a possibility; for example, let's say we want fadump to
continue to run instead of rebooting to a new kernel as it does today.

> I'm not sure if that's actually a concern in practice given the relative
> size of the initrd and memory on most systems. But possibly there are
> other things that get reserved and then freed which could skew the hash
> table size calculation.

--
Thanks and Regards
Srikar Dronamraju
