On Tue, Apr 3, 2018 at 9:16 PM, David Ahern <dsah...@gmail.com> wrote:
> On 4/1/18 6:47 PM, Md. Islam wrote:
>> This patch implements IPv4 forwarding on xdp_buff. I added a new
>> config option, XDP_ROUTER. The kernel forwards packets through a fast
>> path when this option is enabled, but it requires driver support.
>> Currently it only works with veth, which I have modified so that it
>> outputs xdp_buff. I created a testbed in Mininet; the Mininet
>> script (topology.py) is attached. The topology is:
>>
>> h1 -----r1-----h2 (r1 acts as a router)
>>
>> This patch improves the throughput from 53.8 Gb/s to 60 Gb/s on my
>> machine. The median RTT also improves from around 0.055 ms to around
>> 0.035 ms.
>>
>> I then disabled hyperthreading and CPU frequency scaling in order to
>> better utilize the CPU cache (DPDK also relies on the CPU cache to
>> improve forwarding). This further improves per-packet forwarding
>> latency from around 400 ns to 200 ns. More specifically, header
>> parsing and the FIB lookup together take only around 82 ns. This
>> suggests the approach could be used to implement line-rate packet
>> forwarding in the kernel.
>>
>> The patch was generated against 4.15.0+. Please let me know your
>> feedback and suggestions, and whether this approach makes sense.
>
> This patch is not really using eBPF and XDP but rather trying to
> short-circuit forwarding through a veth pair.
>
> Have you looked at the loss in performance with this config enabled if
> there is no r1? i.e., h1 {veth1} <---> {veth2} / h2. You are adding a
> per-packet lookup to the Tx path.

Yes, that case still works: if there is no r1, the code falls back to
dev_forward_skb() to forward the packet. My main objective here was to
implement the router datapath, roughly sketched below. Personally I feel
it should be part of the kernel rather than an eBPF program, but I am
still looking forward to your patch and performance numbers.
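
For concreteness, here is a minimal sketch of what that fast path boils
down to. This is not the patch itself: xdp_fwd_xmit() is a hypothetical
transmit hook I made up for the example, while fib_lookup() and
ip_decrease_ttl() are the existing kernel helpers:

    /*
     * Sketch of an XDP_ROUTER-style fast path: parse the IPv4 header
     * out of the xdp_buff, do a FIB lookup, and hand the frame to the
     * egress device. Any error return means "punt to the slow path".
     */
    #include <linux/errno.h>
    #include <linux/etherdevice.h>
    #include <linux/filter.h>	/* struct xdp_buff in 4.15 */
    #include <linux/ip.h>
    #include <net/ip.h>
    #include <net/ip_fib.h>

    static int xdp_router_forward(struct net *net, struct xdp_buff *xdp)
    {
    	void *data = xdp->data;
    	void *data_end = xdp->data_end;
    	struct ethhdr *eth = data;
    	struct flowi4 fl4 = {};
    	struct fib_result res;
    	struct iphdr *iph;

    	/* Runt frames and non-IPv4 traffic take the regular skb path. */
    	if (data + sizeof(*eth) + sizeof(*iph) > data_end)
    		return -EINVAL;
    	if (eth->h_proto != htons(ETH_P_IP))
    		return -EPROTONOSUPPORT;

    	iph = data + sizeof(*eth);
    	if (iph->ttl <= 1)
    		return -EHOSTUNREACH;	/* needs an ICMP error: punt */

    	fl4.daddr = iph->daddr;
    	fl4.saddr = iph->saddr;
    	fl4.flowi4_tos = iph->tos;
    	if (fib_lookup(net, &fl4, &res, 0))
    		return -ENETUNREACH;	/* no route: punt */

    	ip_decrease_ttl(iph);		/* also fixes up the IP checksum */

    	/* Hypothetical hook: rewrite MAC addresses from the nexthop
    	 * neighbour and queue the xdp_buff on the egress device.
    	 */
    	return xdp_fwd_xmit(&res, xdp);
    }

On any error return the caller simply falls back to the normal
dev_forward_skb() path, which is how the no-r1 case above is handled.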

>
> Have you looked at what I would consider a more interesting use case of
> packets into a node and delivered to a namespace via veth?
>
>    +--------------------------+---------------
>    | Host                     | container
>    |                          |
>    |        +-------{ veth1 }-|-{ veth2 }----
>    |        |                 |
>    +----{ eth1 }--------------+
>
> Can xdp / bpf on eth1 be used to speed up delivery to the container?

I hadn't considered that, but it sounds like an important use case. How
do we determine which namespace gets the packet?
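
One possibility that comes to mind (purely an assumption on my part, not
something this patch does): keep the address-to-veth mapping in a BPF
devmap that user space populates, and attach an XDP program to eth1 that
redirects matching packets straight into the host-side veth of the owning
namespace with bpf_redirect_map(). In the sketch below, the CONTAINER_NET
prefix, the keying on the last address byte, and the map layout are all
illustrative:

    /* Illustrative XDP program for eth1: redirect packets destined to
     * a container-owned address into the host-side veth via a devmap.
     */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    /* 10.0.0.0/24: example prefix assumed to belong to the containers */
    #define CONTAINER_NET	0x0a000000
    #define CONTAINER_MASK	0xffffff00

    struct {
    	__uint(type, BPF_MAP_TYPE_DEVMAP);
    	__type(key, __u32);	/* last byte of the destination address */
    	__type(value, __u32);	/* ifindex of the host-side veth */
    	__uint(max_entries, 256);
    } tx_port SEC(".maps");

    SEC("xdp")
    int xdp_to_container(struct xdp_md *ctx)
    {
    	void *data_end = (void *)(long)ctx->data_end;
    	void *data = (void *)(long)ctx->data;
    	struct ethhdr *eth = data;
    	struct iphdr *iph = data + sizeof(*eth);
    	__u32 daddr;

    	/* Bounds checks the verifier insists on */
    	if ((void *)(iph + 1) > data_end)
    		return XDP_PASS;
    	if (eth->h_proto != bpf_htons(ETH_P_IP))
    		return XDP_PASS;

    	daddr = bpf_ntohl(iph->daddr);
    	if ((daddr & CONTAINER_MASK) != CONTAINER_NET)
    		return XDP_PASS;	/* not for a container: normal stack */

    	/* User space keeps tx_port in sync: last byte -> veth ifindex */
    	return bpf_redirect_map(&tx_port, daddr & 0xff, 0);
    }

    char _license[] SEC("license") = "GPL";

For the redirect to actually deliver the frame, the target veth would
itself need XDP transmit support in its driver, which ties back to the
veth changes in this patch.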
