On 12/09/2013 09:26 PM, Sasha Levin wrote:
> On 12/09/2013 12:12 PM, Vlastimil Babka wrote:
>> On 12/09/2013 06:05 PM, Sasha Levin wrote:
>>> On 12/09/2013 04:34 AM, Vlastimil Babka wrote:
>>>> Hello, I will look at it, thanks.
>>>> Do you have specific reproduction instructions?
>>>
>>> Not really, the fuzzer hit it once and I've been unable to trigger it
>>> again. Looking at the piece of code involved, it might have had
>>> something to do with hugetlbfs, so I'll crank up testing on that part.
>>
>> Thanks. Do you have the trinity log and the .config file? I'm currently
>> unable to even boot linux-next with my config/setup due to a GPF.
>> Looking at the code, I wouldn't expect it to encounter a tail page
>> without first encountering the head page and skipping the whole huge
>> page. At least in the THP case, since THP pages should be split when a
>> vma is split. As for hugetlbfs, it should be skipped entirely for
>> mlock/munlock operations. One of these assumptions is probably failing
>> here...
> 
> If it helps, I've added a dump_page() in case we hit a tail page there and 
> got:
> 
> [  980.172299] page:ffffea003e5e8040 count:0 mapcount:1 mapping:          (null) index:0x0
> [  980.173412] page flags: 0x2fffff80008000(tail)
> 
> I can also add anything else in there to get other debug output if you think 
> of something else useful.

Please try the following. Thanks in advance.

------8<------
diff --git a/mm/mlock.c b/mm/mlock.c
index d480cd6..c81b7c3 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -436,11 +436,14 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
 void munlock_vma_pages_range(struct vm_area_struct *vma,
                             unsigned long start, unsigned long end)
 {
+       unsigned long orig_start = start;
+       unsigned int page_increm = 0;
+
        vma->vm_flags &= ~VM_LOCKED;

        while (start < end) {
                struct page *page = NULL;
-               unsigned int page_mask, page_increm;
+               unsigned int page_mask;
                struct pagevec pvec;
                struct zone *zone;
                int zoneid;
@@ -457,6 +460,22 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
                                &page_mask);

                if (page && !IS_ERR(page)) {
+                       if (PageTail(page)) {
+                               struct page *first_page;
+                               dump_page(page);
+                               printk("start=%lu pfn=%lu orig_start=%lu "
+                                      "page_increm=%u "
+                                      "vm_start=%lu vm_end=%lu vm_flags=%lu\n",
+                                       start, page_to_pfn(page), orig_start,
+                                       page_increm,
+                                       vma->vm_start, vma->vm_end,
+                                       vma->vm_flags);
+                               first_page = page->first_page;
+                               printk("first_page pfn=%lu\n",
+                                               page_to_pfn(first_page));
+                               dump_page(first_page);
+                               VM_BUG_ON(true);
+                       }
                        if (PageTransHuge(page)) {
                                lock_page(page);
                                /*

