On 2015/4/3 17:14, Thomas Monjalon wrote:
> 2015-04-03 10:04, Gonzalez Monroy, Sergio:
>> On 02/04/2015 14:41, Jay Rolette wrote:
>>> On Thu, Apr 2, 2015 at 7:55 AM, Thomas Monjalon <thomas.monjalon at 6wind.com> wrote:
>>>
>>>> 2015-04-02 19:30, jerry.lilijun at huawei.com:
>>>>> From: Lilijun <jerry.lilijun at huawei.com>
>>>>>
>>>>> In the function map_all_hugepages(), hugepage memory is actually
>>>>> allocated (faulted in) by memset(virtaddr, 0, hugepage_sz). As a result,
>>>>> it takes about 40s to finish DPDK memory initialization when 40000 2M
>>>>> hugepages are set up in the host OS.
>>>> Yes, it's something we should try to reduce.
>>>>
>>> I have a patch in my tree that does the same opto, but it is commented out
>>> right now. In our case, 2/3 of the startup time for our entire app was due
>>> to that particular call - memset(virtaddr, 0, hugepage_sz). Just zeroing
>>> 1 byte per huge page reduces that by 30% in my tests.
>>>
>>> The only reason I have it commented out is that I didn't have time to make
>>> sure there weren't side effects for DPDK or my app. For normal shared
>>> memory on Linux, pages are initialized to zero automatically once they are
>>> touched, so the memset isn't required, but I wasn't sure whether that
>>> applied to huge pages. I also wasn't sure how hugetlbfs factored into the
>>> equation.
>>>
>>> Hopefully someone can chime in on that. Would love to uncomment the opto :)
>>>
>> I think the opto/patch is good ;)
>>
>> I had a look at the Linux kernel sources (mm/hugetlb.c) and at least since
>> 2.6.32 (the minimum Linux kernel version supported by DPDK) the kernel
>> clears the hugepage (clear_huge_page) when it faults (hugetlb_no_page).
>>
>> Primary DPDK apps do clear_hugedir, clearing previously allocated
>> hugepages, thus triggering hugepage faults (hugetlb_no_page) during
>> map_all_hugepages.
>>
>> Note that even when we exit a primary DPDK app, hugepages remain
>> allocated, which is why apps such as dump_cfg are able to retrieve
>> config/memory information.
>
> OK, thanks Sergio.
>
> So the patch should add a comment to explain that the memset is there to
> trigger the page fault, and why 1 byte is enough.
> I think we should also consider the remap_all_hugepages() function.

Thanks very much. I will update the comments and send it again. A sketch of
the change and a small zero-fill sanity check are included below for
reference.
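Here is a minimal sketch of the idea (not the actual patch: the helper name
is made up and the surrounding mapping loop in map_all_hugepages() is
omitted; virtaddr and hugepage_sz are the names from the quoted code):

#include <string.h>

/*
 * Minimal sketch of the idea (not the actual patch).  Instead of clearing
 * the whole hugepage from user space, write a single byte: the write faults
 * the page in (hugetlb_no_page) and the kernel zeroes the entire hugepage
 * in the fault handler (clear_huge_page), so the full memset is redundant.
 * 'virtaddr' is the address returned by mmap() for one hugepage and
 * 'hugepage_sz' its size, as in the quoted code; the helper name is made up.
 */
void touch_one_hugepage(void *virtaddr, size_t hugepage_sz)
{
	/* old: memset(virtaddr, 0, hugepage_sz); -- clears every byte, slow */
	(void)hugepage_sz;

	/* new: one byte per hugepage is enough to trigger the fault */
	memset(virtaddr, 0, 1);
}

As Thomas notes, the same consideration applies to remap_all_hugepages().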
>
>>>> Isn't it a security hole?
>>>>
>>> Not necessarily. If the kernel pre-zeroes the huge pages via CoW like
>>> normal pages, then definitely not.
>>>
>>> Even if the kernel doesn't pre-zero the pages, if DPDK takes care of
>>> properly initializing memory structures on startup as they are carved out
>>> of the huge pages, then it isn't a security hole. However, that approach
>>> is susceptible to bit rot... You can audit the code and make sure
>>> everything is kosher at first, but you have to worry about new code
>>> making assumptions about how memory is initialized.
>>>
>>>> This article speaks about "prezeroing optimizations" in the Linux kernel:
>>>> http://landley.net/writing/memory-faq.txt
>>>
>>> I read through that when I was trying to figure out whether huge pages
>>> were pre-zeroed or not. It doesn't talk about huge pages much beyond why
>>> they are useful for reducing TLB misses.
>>>
>>> Jay
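For anyone who wants to double-check the zero-fill behaviour Sergio
describes (and that a single write really does fault in and clear a whole
hugepage), a small standalone check along these lines can be used. The
mount point /mnt/huge, the file name, and the 2M page size are assumptions
for the example only; it is not part of the patch:

/* Map one hugepage from hugetlbfs, write a single byte, then verify that
 * every byte of the page reads back as zero. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HUGEPAGE_SZ (2UL * 1024 * 1024)	/* assumes 2M hugepages */

int main(void)
{
	const char *path = "/mnt/huge/zerotest";	/* assumed hugetlbfs mount */
	size_t i;

	int fd = open(path, O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* No ftruncate() needed: hugetlbfs allocates pages at fault time. */
	char *va = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, 0);
	if (va == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	va[0] = 0;	/* single write: faults the page in, kernel clears it */

	for (i = 0; i < HUGEPAGE_SZ; i++) {
		if (va[i] != 0) {
			printf("non-zero byte at offset %zu\n", i);
			return 1;
		}
	}
	printf("hugepage reads back as all zeroes after a single-byte write\n");

	munmap(va, HUGEPAGE_SZ);
	close(fd);
	unlink(path);
	return 0;
}

If a kernel did not clear the page in hugetlb_no_page(), the check would
report the offending offset, which would be a reason to keep the full memset.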