Florian Westphal <f...@strlen.de> wrote:
> we...@ucloud.cn <we...@ucloud.cn> wrote:
> > diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
> > index 0016bb8..9af01ef 100644
> > --- a/net/netfilter/nf_flow_table_ip.c
> > +++ b/net/netfilter/nf_flow_table_ip.c
> > -   neigh_xmit(NEIGH_ARP_TABLE, outdev, &nexthop, skb);
> > +   if (family == NFPROTO_IPV4) {
> > +           iph = ip_hdr(skb);
> > +           ip_decrease_ttl(iph);
> > +
> > +           nexthop = rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr);
> > +           skb_dst_set_noref(skb, &rt->dst);
> > +           neigh_xmit(NEIGH_ARP_TABLE, outdev, &nexthop, skb);
> > +   } else {
> > +           const struct net_bridge_port *p;
> > +
> > +           if (vlan_tag && (p = br_port_get_rtnl_rcu(state->in)))
> > +                   __vlan_hwaccel_put_tag(skb, p->br->vlan_proto, vlan_tag);
> > +           else
> > +                   __vlan_hwaccel_clear_tag(skb);
> > +
> > +           br_dev_queue_push_xmit(state->net, state->sk, skb);
> 
> Won't that result in a module dep on bridge?
> 
> What's the idea behind this patch?
> 
> Do you see a performance improvement when bypassing the bridge layer? If so,
> how much?
> 
> I just wonder if it's really cheaper than not using bridge conntrack in
> the first place :-)

Addendum: Did you look at the nftables fwd expression?  Maybe you can use
it as a simpler way to speed things up?
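
For reference, the fwd statement in the netdev family forwards a packet
straight to another interface at the ingress hook, without going through
the normal forwarding path. A minimal, untested sketch of such a ruleset
follows; the device names and the address match are placeholders, not
taken from the patch above:

  table netdev fastfwd {
      chain ingress {
          # attach to the ingress hook of the receiving device
          type filter hook ingress device "eth0" priority 0;

          # hand matching packets directly to the egress device
          ip daddr 192.168.1.0/24 fwd to "eth1"
      }
  }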
