Anshuman Khandual <khand...@linux.vnet.ibm.com> writes:

> Though migrating gigantic HugeTLB pages does not sound much like real
> world use case, they can be affected by memory errors. Hence migration
> at the PGD level HugeTLB pages should be supported just to enable soft
> and hard offline use cases.
In that case do we want to isolate the entire 16GB range? Or should we
just dequeue the page from the hugepage pool, convert it to regular 64K
pages, and then isolate only the 64K page that had the memory error?

> While allocating the new gigantic HugeTLB page, it should not matter
> whether new page comes from the same node or not. There would be very
> few gigantic pages on the system afterall, we should not be bothered
> about node locality when trying to save a big page from crashing.
>
> This introduces a new HugeTLB allocator called alloc_gigantic_page()
> which will scan over all online nodes on the system and allocate a
> single HugeTLB page.

-aneesh
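[Editor's note: a minimal sketch of the node-scanning behaviour the quoted
commit message describes for alloc_gigantic_page(). The function name
alloc_gigantic_page_all_nodes() and the per-node helper
alloc_gigantic_page_node() are placeholders for illustration only, not the
names or the code from the patch; the real per-node allocation would be
built on something like alloc_contig_range().]

	#include <linux/mm.h>
	#include <linux/hugetlb.h>
	#include <linux/nodemask.h>

	/*
	 * Hypothetical helper: allocate one gigantic page worth of
	 * contiguous memory on the given node, or return NULL.
	 */
	static struct page *alloc_gigantic_page_node(int nid, unsigned int order);

	static struct page *alloc_gigantic_page_all_nodes(struct hstate *h)
	{
		struct page *page;
		int nid;

		/*
		 * Node locality does not matter much for the handful of
		 * gigantic pages on a system, so simply try every online
		 * node until one can satisfy a huge_page_order(h) sized
		 * contiguous allocation.
		 */
		for_each_online_node(nid) {
			page = alloc_gigantic_page_node(nid, huge_page_order(h));
			if (page)
				return page;
		}
		return NULL;
	}

The point of the sketch is only that the allocator gives up on node
locality and takes the first online node able to satisfy the request.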