On 12/26/2015 at 11:21 PM, Minfei Huang wrote:
> On 12/23/15 at 07:12pm, Xunlei Pang wrote:
>> Implement the protection method for the crash kernel memory
>> reservation for the 64-bit x86 kdump.
>>
>> Signed-off-by: Xunlei Pang <xlp...@redhat.com>
>> ---
>> Only the x86_64 implementation is provided, as I've only tested on x86_64 so far.
>>
>>  arch/x86/kernel/machine_kexec_64.c | 43 ++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 43 insertions(+)
>>
>> diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
>> index 819ab3f..a3d289c 100644
>> --- a/arch/x86/kernel/machine_kexec_64.c
>> +++ b/arch/x86/kernel/machine_kexec_64.c
>> @@ -536,3 +536,46 @@ overflow:
>>      return -ENOEXEC;
>>  }
>>  #endif /* CONFIG_KEXEC_FILE */
>> +
>> +#ifdef CONFIG_KEXEC_CORE
>> +static int
>> +kexec_mark_range(unsigned long start, unsigned long end, bool protect)
>> +{
>> +    struct page *page;
>> +    unsigned int nr_pages;
>> +
>> +    if (!start || !end || start >= end)
>> +            return 0;
>> +
>> +    page = pfn_to_page(start >> PAGE_SHIFT);
>> +    nr_pages = (end + 1 - start) >> PAGE_SHIFT;
> The start and end may span two pages, even though the range is smaller
> than PAGE_SIZE. You can use the following to calculate the page count:
>
> nr_pages = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;

For subpage ranges, you're right. I'll adjust this, thanks!
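
To make the difference concrete (the values below are just made up for
illustration, they are not from the patch): take a 16-byte range that
straddles a page boundary with 4K pages, e.g. start = 0x1ff8, end = 0x2007.

    /* standalone user-space sketch comparing the two calculations */
    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        unsigned long start = 0x1ff8;   /* last 8 bytes of one page  */
        unsigned long end   = 0x2007;   /* first 8 bytes of the next */

        /* as in the posted patch: 16 >> 12 == 0 pages */
        unsigned long old_nr = (end + 1 - start) >> PAGE_SHIFT;

        /* suggested calculation: 2 - 1 + 1 == 2 pages */
        unsigned long new_nr = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;

        printf("old: %lu pages, new: %lu pages\n", old_nr, new_nr);
        return 0;
    }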

Regards,
Xunlei

>
> Thanks
> Minfei
>
>> +    if (protect)
>> +            return set_pages_ro(page, nr_pages);
>> +    else
>> +            return set_pages_rw(page, nr_pages);
>> +}
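
For reference, a sketch of how the helper could look with the suggested
calculation folded in (untested, assuming the rest of the hunk stays as
posted):

    static int
    kexec_mark_range(unsigned long start, unsigned long end, bool protect)
    {
        struct page *page;
        unsigned int nr_pages;

        if (!start || !end || start >= end)
                return 0;

        page = pfn_to_page(start >> PAGE_SHIFT);
        /* count every page the [start, end] range touches */
        nr_pages = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;

        if (protect)
                return set_pages_ro(page, nr_pages);
        else
                return set_pages_rw(page, nr_pages);
    }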


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
