Uladzislau Rezki writes:
> Hello, Daniel
>
>> @@ -1294,14 +1299,19 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
>>  	spin_lock(&free_vmap_area_lock);
>>  	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
>>  		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
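For readers following along, here is a minimal userspace sketch (not kernel code) of the arithmetic in the quoted hunk: the number of pages a vmap_area covers is its byte length shifted down by PAGE_SHIFT. The stripped-down struct, the 4 KiB page size, and the addresses are assumptions for illustration only, and it presumes a 64-bit unsigned long.

#include <stdio.h>

#define PAGE_SHIFT 12                 /* assume 4 KiB pages */

struct vmap_area {                    /* stripped-down stand-in, not the kernel struct */
	unsigned long va_start;
	unsigned long va_end;
};

int main(void)
{
	/* a hypothetical 3-page area in vmalloc space */
	struct vmap_area va = {
		.va_start = 0xffffc90000000000UL,
		.va_end   = 0xffffc90000003000UL,
	};

	unsigned long nr = (va.va_end - va.va_start) >> PAGE_SHIFT;

	printf("area covers %lu pages\n", nr);   /* prints 3 */
	return 0;
}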
On 10/29/19 7:20 AM, Daniel Axtens wrote:
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
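As a rough illustration of the shadow arithmetic behind that description (not the kernel implementation): with the usual KASAN scale of 8 bytes of memory per shadow byte (KASAN_SHADOW_SCALE_SHIFT == 3), one 4 KiB shadow page covers 32 KiB of vmalloc space, so a small mapping needs only a fraction of a shadow page and several mappings can share one. The page size and shadow offset below are placeholders, not real architecture constants.

#include <stdio.h>

#define PAGE_SHIFT               12      /* assume 4 KiB pages */
#define PAGE_SIZE                (1UL << PAGE_SHIFT)
#define KASAN_SHADOW_SCALE_SHIFT 3       /* 1 shadow byte per 8 bytes of memory */
#define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL  /* placeholder value */

/* Same shape as the kernel's kasan_mem_to_shadow() helper. */
static unsigned long mem_to_shadow(unsigned long addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	unsigned long start = 0xffffc90000000000UL;  /* made-up vmalloc address */
	unsigned long size  = 2 * PAGE_SIZE;         /* a small 8 KiB mapping */

	/* shadow bytes needed: 1/8 of the mapping size */
	unsigned long shadow_bytes = size >> KASAN_SHADOW_SCALE_SHIFT;

	printf("shadow starts at %#lx\n", mem_to_shadow(start));
	printf("needs %lu shadow bytes out of a %lu-byte shadow page\n",
	       shadow_bytes, PAGE_SIZE);
	return 0;
}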