On 22/07/2015 11:40, Thomas Monjalon wrote:
> Sergio,
>
> As the maintainer of memory allocation, would you consider using
> libhugetlbfs in DPDK for Linux?
> It may simplify a part of our memory allocator and avoid some potential
> bugs which would be already fixed in the dedicated lib.

I did have a look at it a couple of months ago and I thought there were a
few issues:

- get_hugepage_region/get_huge_pages only allocate default-size huge pages
  (you can set a different default huge page size with environment
  variables, but there is no support for multiple sizes), plus we have no
  guarantee of getting physically contiguous pages.

- That leaves us with hugetlbfs_unlinked_fd/hugetlbfs_unlinked_fd_for_size.
  These APIs would not simplify the current code much, just the allocation
  of the pages themselves (ie. creating a file in a hugetlbfs mount). Then
  there is the issue with multi-process: because they return a file
  descriptor while unlinking the file, we would need some sort of
  inter-process communication to pass the descriptors to secondary
  processes (see the fd-passing sketch below).

- Not a big deal, but AFAIK it is not possible to have multiple mount
  points for the same hugepage size, and even if you do,
  hugetlbfs_find_path_for_size always returns the same path (ie. the first
  one found in the list).

- We still need to parse /proc/self/pagemap to get the physical addresses
  of the mapped hugepages (see the pagemap sketch below).
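To make the multi-process point concrete, here is a rough sketch (not DPDK
code; map_one_hugepage/send_fd are just illustrative helpers) of allocating
through the unlinked-fd API and then handing the descriptor to a secondary
process. Since the backing file is unlinked, the only way to share the
mapping is to ship the fd itself over a UNIX domain socket with SCM_RIGHTS:

#include <hugetlbfs.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>
#include <unistd.h>

/* Allocate one hugepage via libhugetlbfs: the backing file is created in a
 * hugetlbfs mount and immediately unlinked, so only the fd and the mapping
 * keep it alive. */
static void *map_one_hugepage(long page_size, int *out_fd)
{
	int fd = hugetlbfs_unlinked_fd_for_size(page_size);
	if (fd < 0)
		return NULL;

	void *va = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, 0);
	if (va == MAP_FAILED) {
		close(fd);
		return NULL;
	}
	*out_fd = fd;
	return va;
}

/* A secondary process has no path left to open, so the primary would have
 * to push every fd over a UNIX domain socket as SCM_RIGHTS ancillary data. */
static int send_fd(int sock, int fd)
{
	struct msghdr msg;
	struct iovec iov;
	char payload = 0;
	union {
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;
	} ctrl;

	memset(&msg, 0, sizeof(msg));
	iov.iov_base = &payload;
	iov.iov_len = 1;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = ctrl.buf;
	msg.msg_controllen = sizeof(ctrl.buf);

	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

That is the kind of IPC we avoid today, because secondary processes can
simply re-map the named files in the hugepage mount.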
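And regardless of which libhugetlbfs API we used, we would still need
something along these lines for the physical addresses (again a rough
sketch, not the actual eal_memory code; virt2phys is an illustrative name).
Each 8-byte pagemap entry holds the PFN in bits 0-54 and the present flag
in bit 63:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Translate a virtual address to a physical one by indexing
 * /proc/self/pagemap with the virtual page frame number. */
static uint64_t virt2phys(const void *va)
{
	long page_size = sysconf(_SC_PAGESIZE);
	uint64_t entry;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return 0;

	/* One 64-bit entry per (standard size) virtual page. */
	off_t offset = ((uintptr_t)va / page_size) * sizeof(entry);
	if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
		close(fd);
		return 0;
	}
	close(fd);

	if (!(entry & (1ULL << 63)))		/* bit 63: page present */
		return 0;

	uint64_t pfn = entry & ((1ULL << 55) - 1);	/* bits 0-54: PFN */
	return pfn * page_size + ((uintptr_t)va % page_size);
}

That lookup, plus sorting the resulting virtual/physical pairs to find
physically contiguous runs, is the part libhugetlbfs does not help with.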
I guess that if we were to push for a new API such as hugetlbfs_fd_for_size,
we could use it for the hugepage allocation, but we would still have to
parse /proc/self/pagemap to get the physical addresses and then order those
hugepages.

Thoughts?

Sergio