Hi Samuel,
Thanks for resending.

I'm CCing Ben in case he has more points.


At a high level, IP Helper uses the ARP and IP routing stack functionality of 
the Hyper-V host. It consists of the following parts:
1. Data structures that maintain the L2 and L3 caches.
2. Functionality to query the L2 and L3 caches.
   * We rely on the host's routing table to figure out the best interface to 
reach a destination. This gives the L3 source IP.
   * We rely on the host's ARP table to figure out the mapping between the L3 
destination and its L2 address.
3. A thread, since we cannot call into IP Helper at DISPATCH_LEVEL (i.e., from 
a DPC).
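The parts above can be sketched roughly as follows. This is a hypothetical 
Python illustration of the two caches and the query path, not code from the 
OVS tree; all names, interfaces and addresses are made up.

```python
import ipaddress

# Hypothetical sketch of the two caches an IP Helper-backed design maintains.
# The names (l3_cache, l2_cache, lookup) are illustrative only.

# L3 cache: destination prefix -> (best interface, source IP), as learned
# from the host's routing table.
l3_cache = {
    ipaddress.ip_network("10.0.0.0/24"):
        ("vEthernet0", ipaddress.ip_address("10.0.0.5")),
}

# L2 cache: next-hop IP -> MAC address, as learned from the host's ARP table.
l2_cache = {
    ipaddress.ip_address("10.0.0.1"): "aa:bb:cc:dd:ee:01",
}

def lookup(dst):
    """Longest-prefix match in the L3 cache, then an L2 lookup for the hop."""
    dst = ipaddress.ip_address(dst)
    candidates = [net for net in l3_cache if dst in net]
    if not candidates:
        return None
    best = max(candidates, key=lambda net: net.prefixlen)
    iface, src_ip = l3_cache[best]
    mac = l2_cache.get(dst)  # assumes an on-link destination for simplicity
    return iface, src_ip, mac
```

The point of the sketch is only the shape of the data: the L3 query yields the 
outgoing interface and source IP, and the L2 query yields the destination MAC.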

Some advantages of doing so are:
a. We don't have to implement an ARP stack as well as a routing table, and the 
related functionality of:
   * Populating the routing table
   * Listening to ARP, RARP and GARP messages, marking entries as stale, etc. 
This is easier with IP Helper since the host provides a nice callback.
b. It is easier to support multiple VTEPs in terms of the IP routing stack 
(even though OVS on Hyper-V does not support multiple VTEPs today).
c. In the future, when the underlay network (i.e., the VTEP IP) moves to IPv6, 
we would otherwise have to implement an ND6 stack, which is complicated; ND6 
is a beast in itself compared to ARP.
d. We can work in a mode where there are no PIF bridges (though we don't 
support this mode today).
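To illustrate point (a): with the host's callback, we never parse ARP/RARP/GARP 
traffic ourselves; a notification handler just marks or refreshes the cached 
entry. Below is a hypothetical Python sketch of that pattern, with all names 
invented; on Windows the real mechanism would be an IP Helper change 
notification, not a Python function.

```python
# Hypothetical ARP-cache invalidation driven by host notifications.
# arp_cache and on_neighbor_change are illustrative names only.

arp_cache = {}  # ip (str) -> {"mac": str, "stale": bool}

def on_neighbor_change(ip, new_mac=None):
    """Invoked by the (simulated) host notification for a neighbor entry.

    new_mac is None when the host reports the mapping is no longer valid;
    otherwise the host has resolved a fresh mapping.
    """
    entry = arp_cache.get(ip)
    if entry is None:
        return
    if new_mac is None:
        entry["stale"] = True       # mark for re-resolution / cleanup
    else:
        entry["mac"] = new_mac      # refresh in place
        entry["stale"] = False

arp_cache["10.0.0.1"] = {"mac": "aa:bb:cc:dd:ee:01", "stale": False}
on_neighbor_change("10.0.0.1")      # invalidation -> entry marked stale
```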

I am all for simplifying the IP Helper code if you can think of better ways of 
doing it, but changing the model to do ARP ourselves does not scale well, I 
think. Even with an ARP stack implementation, we cannot get away from doing #1 
and #2.

By implementing our own ARP stack, we could perhaps get rid of #3 by 
processing the ARP packets inline, but we would still need a thread to clean 
up stale ARP entries. Otherwise, the data structure can bloat up.
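A minimal sketch of that cleanup, assuming a TTL-based eviction policy (the 
TTL value, lock usage and all names are illustrative, not from the OVS tree):

```python
import threading
import time

# Hypothetical periodic cleanup that a dedicated thread would run: entries
# not refreshed within ARP_TTL seconds are evicted so the table stays bounded.

ARP_TTL = 30.0  # seconds an entry stays valid without a refresh (illustrative)

arp_cache = {}                 # ip (str) -> (mac, last_refreshed)
cache_lock = threading.Lock()  # serializes cache access with the datapath

def refresh(ip, mac, now=None):
    """Record a freshly learned or confirmed mapping."""
    with cache_lock:
        arp_cache[ip] = (mac, now if now is not None else time.monotonic())

def evict_stale(now=None):
    """One pass of the cleanup thread: drop entries older than ARP_TTL."""
    now = now if now is not None else time.monotonic()
    with cache_lock:
        expired = [ip for ip, (_, ts) in arp_cache.items() if now - ts > ARP_TTL]
        for ip in expired:
            del arp_cache[ip]
```

In a real driver this pass would run periodically on the cleanup thread; the 
`now` parameter here just makes the sketch easy to exercise deterministically.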

While reviewing the Cloudbase code, I inferred that the reason for 
implementing an ARP stack was to be able to operate in the 
"AllowManagementOS = FALSE" mode. If this discussion is leading there, then it 
is a much bigger discussion.

thanks,
Nithin


On Aug 20, 2014, at 11:09 AM, Samuel Ghinet <sghi...@cloudbasesolutions.com>
 wrote:

> Re-sent.
> ________________________________
> From: Samuel Ghinet
> Sent: Friday, August 08, 2014 8:57 PM
> To: dev@openvswitch.org
> Subject: OvsIpHelper vs the ARP method
> 
> Hello guys,
> 
> I wanted to ask you about this since a week or so.
> 
> I have seen that you use OvsIpHelper to find the destination Ethernet 
> address for a given target IP (that of the destination hypervisor).
> I find the OvsIpHelper functionality quite complicated, with calls to it in 
> quite a few places within the project (as in OvsConnectNic).
> 
> The method we used in our implementation was:
> o) have an ARP table (a list of ARP entries: mappings between Ethernet 
> addresses and IP addresses)
> o) have a lock (for sync)
> 
> And the operations:
> o) add / update an ARP entry whenever you receive an ARP reply
> o) originate an ARP request whenever a tunnel is added (for a destination 
> Hyper-V host), or when trying to output to a tunneling port and you don't 
> have the destination Ethernet address.
> 
> I had found the OvsIpHelper functionality quite intricate: it requires the 
> creation of a new thread, more lists and locks, a lot of code, an IOCTL for 
> it, and also more dependencies in the code. Also, almost any information 
> that we'd want to gather about the physical NIC we can get via the NIC OIDs 
> or by issuing OID requests.
> 
> Could you please share with me some reasons you have for why you consider the 
> OvsIpHelper approach better?
> 
> Thanks!
> Samuel


_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
