Forgot to add everyone in the reply.
---------- Forwarded message ----------
From: Md. Islam
Date: Mon, Nov 19, 2018 at 11:35 PM
Subject: Re: [PATCH RFC net-next] net: SAIL based FIB lookup for XDP
To:
On Sun, Nov 18, 2018 at 12:42 PM David Ahern wrote:
>
> On 11/11/18 7:25 PM, Md. Islam wrote:
This patch implements SAIL [1] based routing table lookup for XDP.
However, I made some changes from the original proposal (details are
described in the patch). These changes decreased the memory consumption
from 21.94 MB to 4.97 MB for my example routing table with 400K
routes.
This patch can perform
your Poptrie implementation.
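For context, SAIL_L level-pushes all prefixes to lengths 16, 24 and 32,
so a lookup touches at most three arrays. Below is a minimal userspace
sketch of that descent, assuming arrays built as in the paper; the names
n16/c16/n24/c24/n32 are illustrative, not the patch's identifiers.

#include <stdint.h>

struct sail {
	uint8_t  *n16;	/* 2^16 next-hop ids for prefixes pushed to /16 */
	uint16_t *c16;	/* 2^16 chunk ids; non-zero means descend to level 24 */
	uint8_t  *n24;	/* 256 next hops per level-24 chunk */
	uint16_t *c24;	/* 256 chunk ids per chunk, pointing into level 32 */
	uint8_t  *n32;	/* 256 next hops per level-32 chunk */
};

static inline uint8_t sail_lookup(const struct sail *s, uint32_t daddr)
{
	uint32_t i16 = daddr >> 16;
	uint32_t c = s->c16[i16];

	if (!c)
		return s->n16[i16];	/* longest match is /16 or shorter */

	uint32_t i24 = ((uint32_t)(c - 1) << 8) | ((daddr >> 8) & 0xff);

	c = s->c24[i24];
	if (!c)
		return s->n24[i24];	/* longest match is /24 or shorter */

	return s->n32[((uint32_t)(c - 1) << 8) | (daddr & 0xff)];
}

Only chunks that actually contain longer prefixes need to be allocated,
which is where the small memory footprint comes from.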
>
> Would you please wait for a while until NTT Communications
> decide its response. We will inform you as soon as it is decided.
>
> Best regards,
> Yasu
>
>> -----Original Message-----
>> From: Md. Islam [mailto:misl...@kent.edu]
>> S
On Tue, Sep 4, 2018 at 12:14 PM, Md. Islam wrote:
>
> On Tue, Sep 4, 2018, 6:53 AM Jesper Dangaard Brouer
> wrote:
>>
>> Hi Md. Islam,
>>
>> People will start to ignore you when you don't interact appropriately
>> with the community, and you ignore
On Mon, Aug 27, 2018 at 12:56 PM, David Ahern wrote:
> On 8/27/18 10:24 AM, Stephen Hemminger wrote:
>>
>> Also, as Dave mentioned any implementation needs to handle multiple
>> namespaces
>> and routing tables.
>>
>> Could this alternative lookup be enabled via sysctl at runtime rather than
>> a compile-time config option?
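A runtime toggle along those lines could be a per-namespace sysctl read
on the lookup path. Here is a minimal sketch, assuming a hypothetical
net.ipv4.fib_alt_lookup knob; the name and the flag are illustrative,
not from any patch.

#include <linux/init.h>
#include <linux/sysctl.h>
#include <net/net_namespace.h>

static int zero;
static int one = 1;
static int sysctl_fib_alt_lookup;	/* 0 = LC-trie (default), 1 = alternative */

static struct ctl_table fib_alt_table[] = {
	{
		.procname	= "fib_alt_lookup",
		.data		= &sysctl_fib_alt_lookup,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{ }
};

static int __init fib_alt_sysctl_init(void)
{
	/* the lookup fast path would then branch on sysctl_fib_alt_lookup */
	if (!register_net_sysctl(&init_net, "net/ipv4", fib_alt_table))
		return -ENOMEM;
	return 0;
}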
On Mon, Aug 27, 2018 at 12:24 PM, Stephen Hemminger
wrote:
> On Sun, 26 Aug 2018 22:28:48 -0400
> "Md. Islam" wrote:
>
>> This patch implements Poptrie [1] based FIB lookup. It exhibits pretty
>> impressive lookup performance compared to LC-trie. This poptrie
>
This patch implements Poptrie [1] based FIB lookup. It exhibits pretty
impressive lookup performance compared to LC-trie. This poptrie
implementation, however, somewhat deviates from the original
implementation [2]. I tested this patch very rigorously with several
FIB tables containing half a million routes.
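For reference, the core of a poptrie lookup is a 6-bit-stride descent
that uses popcount to index packed child and leaf arrays. A sketch
following the paper's field names (vector, leafvec, base0, base1),
which need not match this implementation:

#include <stdint.h>

struct poptrie_node {
	uint64_t vector;	/* bit v set: slot v is an internal child */
	uint64_t leafvec;	/* bit v set: a new leaf value starts at slot v */
	uint32_t base0;		/* index of this node's first leaf */
	uint32_t base1;		/* index of this node's first child */
};

/* top 6 bits of the key at the given depth (assumes offset + 6 <= 32) */
static inline uint32_t extract6(uint32_t key, int offset)
{
	return (key >> (32 - offset - 6)) & 0x3f;
}

static uint16_t poptrie_lookup(const struct poptrie_node *nodes,
			       const uint16_t *leaves, uint32_t key)
{
	const struct poptrie_node *n = &nodes[0];
	int offset = 0;
	uint32_t v = extract6(key, offset);

	while (n->vector & (1ULL << v)) {
		/* children are packed: count the set bits up to slot v */
		n = &nodes[n->base1 +
			   __builtin_popcountll(n->vector & ((2ULL << v) - 1)) - 1];
		offset += 6;
		v = extract6(key, offset);
	}
	/* same trick on leafvec to find the packed leaf index */
	return leaves[n->base0 +
		      __builtin_popcountll(n->leafvec & ((2ULL << v) - 1)) - 1];
}

The structure in the paper also direct-points the first 16 or 18 bits
into a root array, which keeps the common case at one or two memory
accesses.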
On Sun, May 13, 2018 at 9:24 AM, Jason A. Donenfeld wrote:
> On Sat, May 12, 2018 at 4:07 AM, Md. Islam wrote:
>> I'm not an expert on this, but it looks about right.
>
> Really? Even zeroing between headers_start and headers_end? With the
> latest RHEL 7.5 kernel
I'm not an expert on this, but it looks about right. You can take a
look at build_skb() or __build_skb(). It shows the fields that need
to be set before passing to netif_receive_skb()/netif_rx().
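Roughly, the wiring looks like the sketch below, loosely following what
build_skb() callers do on receive. frame_to_skb() is a hypothetical
helper, and the exact fields needed vary by driver and kernel version.

#include <linux/skbuff.h>
#include <linux/etherdevice.h>

/*
 * buf:       start of the buffer (needs tailroom for skb_shared_info)
 * headroom:  offset of the packet inside buf (for an xdp_buff this is
 *            xdp->data - xdp->data_hard_start)
 * len:       length of the packet itself
 * frag_size: total buffer size handed to build_skb()
 */
static struct sk_buff *frame_to_skb(struct net_device *dev, void *buf,
				    unsigned int headroom, unsigned int len,
				    unsigned int frag_size)
{
	struct sk_buff *skb = build_skb(buf, frag_size);

	if (!skb)
		return NULL;

	skb_reserve(skb, headroom);		  /* skip to the packet start */
	skb_put(skb, len);			  /* mark the payload as used */
	skb->protocol = eth_type_trans(skb, dev); /* sets mac header and skb->dev */

	/* caller hands the result to netif_receive_skb() or netif_rx() */
	return skb;
}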
On Fri, May 11, 2018 at 6:56 PM, Jason A. Donenfeld wrote:
> Hey Netdev,
>
> A UDP skb comes in via the
Gotcha. I'm working on it. I've created a function that creates an
sk_buff from an xdp_buff, but I'm still getting an error while the
sk_buff is being processed by TCP. I will send you the patch once I'm done.
Thanks!
On Thu, Apr 5, 2018 at 10:55 PM, David Ahern wrote:
> On 4/3/18
On Wed, Apr 4, 2018 at 2:16 AM, Jesper Dangaard Brouer
wrote:
>
> On Sun, 1 Apr 2018 20:47:28 -0400, "Md. Islam" wrote:
>
>> [...] More specifically, header parsing and fib
>> lookup take only around 82 ns. This shows that this could be used to
>> implement line
On Tue, Apr 3, 2018 at 9:16 PM, David Ahern wrote:
> On 4/1/18 6:47 PM, Md. Islam wrote:
>> This patch implements IPv4 forwarding on xdp_buff. I added a new
>> config option XDP_ROUTER. Kernel would forward packets through fast
>> path when this option is enabled. But it
Yes, I'm also seeing good performance improvement after adding
likely() and prefetch().
On Sun, Apr 1, 2018 at 2:50 PM, Stephen Hemminger
wrote:
> On Sun, 1 Apr 2018 20:31:21 +0200
> Anton Gary Ceph wrote:
>
>> As the Linux networking stack is growing, more and more protocols are
>> added, incr
This patch implements IPv4 forwarding on xdp_buff. I added a new
config option, XDP_ROUTER. The kernel forwards packets through the fast
path when this option is enabled, but it requires driver support.
Currently it only works with veth. Here I have modified veth such that
it outputs xdp_buff. I c
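Schematically, the fast path amounts to something like the following;
xdp_router_forward() is an illustrative name, not a function from the
patch, and error handling is elided.

#include <linux/etherdevice.h>
#include <linux/ip.h>
#include <linux/netdevice.h>
#include <net/ip.h>
#include <net/ip_fib.h>
#include <net/xdp.h>

static bool xdp_router_forward(struct net_device *in_dev, struct xdp_buff *xdp)
{
	struct ethhdr *eth = xdp->data;
	struct iphdr *iph = (struct iphdr *)(eth + 1);
	struct fib_result res;
	struct flowi4 fl4 = {};

	if ((void *)(iph + 1) > xdp->data_end)
		return false;		/* runt frame: fall back to the stack */
	if (iph->ttl <= 1)
		return false;		/* let the stack generate the ICMP error */

	fl4.daddr = iph->daddr;
	if (fib_lookup(dev_net(in_dev), &fl4, &res, 0))
		return false;		/* no route: slow path */

	ip_decrease_ttl(iph);		/* also fixes up the header checksum */

	/* rewrite the MAC addresses and transmit on the route's egress
	 * device; returning false anywhere above falls back to the
	 * normal netif_receive_skb() path */
	return true;
}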
On Mon, Mar 26, 2018 at 9:01 PM, Md. Islam wrote:
> On Mon, Mar 26, 2018 at 10:21 AM, David Miller wrote:
>> From: "Md. Islam"
>> Date: Fri, 23 Mar 2018 02:43:16 -0400
>>
>>> +#ifdef CONFIG_XDP_ROUTER
>>> +//if IP forwarding is enabled on the receiver, create xdp_buff
On Mon, Mar 26, 2018 at 10:21 AM, David Miller wrote:
> From: "Md. Islam"
> Date: Fri, 23 Mar 2018 02:43:16 -0400
>
>> +#ifdef CONFIG_XDP_ROUTER
>> +//if IP forwarding is enabled on the receiver, create xdp_buff
>> +//from skb and call
Hi
This patch implements IPv4 forwarding on xdp_buff. Currently it only
works with veth. It forwards packets as soon as a veth receives a
packet. Currently veth uses the slow path for packet forwarding, which
requires the packet to go through the upper layers. However, this patch
forwards the packet as soon as
On Mon, Feb 12, 2018 at 11:29 AM, Masami Hiramatsu wrote:
> On Mon, 12 Feb 2018 00:08:46 -0500
> "Md. Islam" wrote:
>
>> Recently the tcp_probe kernel module has been replaced by a trace event.
>> The old tcp_probe had a full=0 option where it took a snapshot only when
Recently the tcp_probe kernel module has been replaced by a trace
event. The old tcp_probe had a full=0 option where it took a snapshot
only when the congestion window changed. However, I did not find such
functionality in the trace event. This is why I implemented this
"conditional trace_event", where a snapshot is taken only when the
congestion window changes.
Hi
I'm using the tcp_probe tracepoint as in [1]. It takes a snapshot each
time tcp_rcv_established() is called. However, I need to take a snapshot
only when the congestion window changes. The old tcp_probe had a full=0
option to achieve this. Is there a way to achieve this with the
tcp_probe tracepoint?
Many thanks
T
In kernel 4.15.0+, netem does not work properly.
Netem setup:
tc qdisc add dev h1-eth0 root handle 1: netem delay 10ms 2ms
Result:
PING 172.16.101.2 (172.16.101.2) 56(84) bytes of data.
64 bytes from 172.16.101.2: icmp_seq=1 ttl=64 time=22.8 ms
64 bytes from 172.16.101.2: icmp_seq=2 ttl=64 time
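For reference on the expected numbers: delay 10ms 2ms adds 10 ms +/- 2 ms
to each packet leaving h1-eth0, so with the qdisc on one side only the
ping RTT should land around baseline + 10 ms +/- 2 ms. The ~22.8 ms shown
is roughly double that, assuming no qdisc shapes the reply path, which
seems to be the misbehavior being reported.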