On Thu, Jun 22, 2017 at 9:36 PM, David Miller wrote:
> From: Willem de Bruijn
> Date: Thu, 22 Jun 2017 16:57:07 -0400
>
>>> Likewise.
>>>
>>>> +        f_off = f->page_offset;
>>>> +        f_size = f->size;
>>>> +
>>>> +        vaddr = kmap_atomic(skb_frag_page(f));
>>>
>>> I looked at some kmap_atomic() implementations and I do not think
>>> it supports compound pages.
>>
>> Indeed. Thanks. It appears that I can do the obvious thing and
>> kmap the individual page that is being copied inside the loop:
>>
>>   kmap_atomic(skb_frag_page(f) + (f_off >> PAGE_SHIFT));

Perhaps calls to kmap_atomic can be replaced with a
kmap_compound(..) that checks

  __this_cpu_read(__kmap_atomic_idx) + (1 << compound_order(p)) < KM_TYPE_NR

before calling kmap_atomic on all pages in the compound page. In
the common case that the page is not highmem, a single
kmap_atomic() of the head page suffices: a lowmem compound page is
contiguous in the kernel's linear mapping, so one virtual address
covers all of its pages.
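
For reference, a minimal sketch of the kmap_compound() idea above.
kmap_compound() is not an existing kernel API; this assumes the
4.12-era highmem internals (__kmap_atomic_idx, KM_TYPE_NR), invents a
caller-supplied vaddr array, and omits the matching unmap path:

  #include <linux/errno.h>
  #include <linux/highmem.h>
  #include <linux/mm.h>

  /*
   * Hypothetical sketch only: map each page of a compound page with
   * kmap_atomic(), storing one virtual address per page, after
   * checking that enough per-cpu atomic kmap slots remain.
   */
  static int kmap_compound(struct page *p, void **vaddr, int nr_max)
  {
          int i, nr = 1 << compound_order(p);

          if (nr > nr_max)
                  return -EINVAL;

          /* Lowmem: the compound page sits in the linear mapping,
           * so a single address covers every page.
           */
          if (!PageHighMem(p)) {
                  vaddr[0] = page_address(p);
                  return 1;
          }

          /* The check proposed above: refuse if mapping all nr
           * pages would exhaust this cpu's atomic kmap slots.
           */
          if (__this_cpu_read(__kmap_atomic_idx) + nr >= KM_TYPE_NR)
                  return -EBUSY;

          /* Note: the per-page fixmap mappings are not virtually
           * contiguous; callers must address each PAGE_SIZE chunk
           * through its own mapping.
           */
          for (i = 0; i < nr; i++)
                  vaddr[i] = kmap_atomic(p + i);

          return nr;
  }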
From: Willem de Bruijn
Date: Thu, 22 Jun 2017 16:57:07 -0400

> Likewise.
>
>> +        f_off = f->page_offset;
>> +        f_size = f->size;
>> +
>> +        vaddr = kmap_atomic(skb_frag_page(f));
>
> I looked at some kmap_atomic() implementations and I do not think
> it supports compound pages.

Indeed. Thanks. It appears that I can do the obvious thing and
kmap the individual page that is being copied inside the loop:

  kmap_atomic(skb_frag_page(f) + (f_off >> PAGE_SHIFT));
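
For illustration, a sketch of such a loop, not the committed code. It
reuses f, f_off and f_size from the hunk above; the destination
pointer dst and its allocation are elided assumptions:

  while (f_size) {
          /* offset within the single page under f_off */
          u32 p_off = f_off & (PAGE_SIZE - 1);
          /* copy at most up to the end of that page */
          u32 copy = min_t(u32, f_size, PAGE_SIZE - p_off);
          /* map only the page being copied, compound or not */
          u8 *vaddr = kmap_atomic(skb_frag_page(f) +
                                  (f_off >> PAGE_SHIFT));

          memcpy(dst, vaddr + p_off, copy);
          kunmap_atomic(vaddr);

          dst += copy;
          f_off += copy;
          f_size -= copy;
  }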
From: Willem de Bruijn
Date: Wed, 21 Jun 2017 17:18:05 -0400

> @@ -958,15 +958,20 @@ EXPORT_SYMBOL_GPL(skb_morph);
>   */
>  int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
>  {
> -	int i;
>  	int num_frags = skb_shinfo(skb)->nr_frags;
>  	struct page *page, *head = NULL;
>
From: Willem de Bruijn

Refine skb_copy_ubufs to support compound pages. With upcoming TCP
and UDP zerocopy sendmsg, such fragments may appear.

The existing code replaces each page one for one. Splitting each
compound page into an independent number of regular pages can result
in exceeding limit MAX_SKB_FRAGS if data is not exactly page aligned.
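
To make the overflow concrete, a toy calculation with hypothetical
numbers (plain C, not kernel code): 17 frags of 100 bytes, each
straddling a page boundary inside a compound page.

  #include <stdio.h>

  #define PAGE_SIZE     4096
  #define MAX_SKB_FRAGS 17    /* default for 4KB pages */

  int main(void)
  {
          int nr_frags = 17, f_size = 100, pages_per_frag = 2;

          /* one-for-one: every touched page becomes its own frag */
          int naive = nr_frags * pages_per_frag;
          /* dense fill: only the total byte count matters */
          int dense = (nr_frags * f_size + PAGE_SIZE - 1) / PAGE_SIZE;

          printf("one-for-one: %d frags (limit %d)\n", naive, MAX_SKB_FRAGS);
          printf("dense fill:  %d frag(s)\n", dense);
          return 0;
  }

This prints 34 frags for one-for-one replacement, double the limit,
versus a single densely filled page.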