Hi Mark,
Thank you for your response,
Please find my reply below,

Regards
_Sugesh


> -----Original Message-----
> From: Gray, Mark D
> Sent: Thursday, July 21, 2016 5:34 PM
> To: Chandran, Sugesh <sugesh.chand...@intel.com>; Jesse Gross
> <je...@kernel.org>
> Cc: dev@openvswitch.org; Giller, Robin <robin.gil...@intel.com>
> Subject: RE: [ovs-dev] Considering the possibility of integrating DPDK generic
> classifier APIs in OVS.
> 
> 
> 
> > -----Original Message-----
> > From: Chandran, Sugesh
> > Sent: Thursday, July 21, 2016 11:56 AM
> > To: Gray, Mark D <mark.d.g...@intel.com>; Jesse Gross
> > <je...@kernel.org>
> > Cc: dev@openvswitch.org; Giller, Robin <robin.gil...@intel.com>
> > Subject: RE: [ovs-dev] Considering the possibility of integrating DPDK
> > generic classifier APIs in OVS.
> >
> > Hi Mark & Jesse
> >
> > Thank you for looking into the the proposal, Please find my answers
> > inline below.
> >
> > Regards
> > _Sugesh
> >
> > > -----Original Message-----
> > > From: Gray, Mark D
> > > Sent: Wednesday, July 20, 2016 7:17 PM
> > > To: Jesse Gross <je...@kernel.org>
> > > Cc: Chandran, Sugesh <sugesh.chand...@intel.com>;
> > > dev@openvswitch.org; Giller, Robin <robin.gil...@intel.com>
> > > Subject: RE: [ovs-dev] Considering the possibility of integrating
> > > DPDK
> > generic
> > > classifier APIs in OVS.
> > >
> > > >
> > > > On Wed, Jul 20, 2016 at 6:43 PM, Gray, Mark D
> > > > <mark.d.g...@intel.com>
> > > > wrote:
> > > > >  [Gray, Mark D] I think we should focus on one or two use cases
> > > > > rather than a general offload like you discuss below. A general
> > > > > offload involves a huge amount of code churn and there are a lot
> > > > > of difficulties,
> > > > some that you have highlighted below.
> > > > > A more focused implementation will flush out any issues with the API.
> > > > > In particular, the VxLAN use case that you mentioned above and
> > > > > perhaps the offload of the hash calculation (but the generic
> > > > > filtering api would also need to support generation of hashes)
> > > > > could be two targets for
> > > > this DPDK api.
> > > >
> > > > I agree that targeting a specific use case is a good idea (as well
> > > > as your other comments). It's probably worthwhile talking to John
> > > > Fastabend about this (also from Intel) since he has tried to
> > > > something similar for the past several years in Linux. Many of the
> > > > general problems listed in the original email turn out to be very 
> > > > difficult.
> > > > (Examples include capabilities; describing flows in a hardware
> > > > independent manner is something that OpenFlow tried to tackle for
> > > > a long time; which flows to offload in the face of table size
> > > > limits while maintaining correct forwarding behavior; etc.)
> > >  [Gray, Mark D]
> > > Yes, John and I have discussed a lot of this in depth and we have
> > > done whiteboarding of possible hw offload designs in OVS which is
> > > why I am
> > quite
> > > familiar with the issues.
> > [Sugesh] I feel that the design must be considering all the
> > capabilities Of DPDK APIs though it uses only for the VxLAN and hashing for
> now.
> > The earlier implementation installs the flows in hardware when a flow
> > get populated in the datapath. Everything happens in the datapath.
> > The main focus of that implementation is to just optimize the VxLAN
> > traffic, so we haven’t consider other cases where flow director can be
> useful.
> > The generic APIs can do much more than just a flow director. So having
> > a generic extendable design in OVS helps in many ways.
> > Comments?
> >
> 
> [Gray, Mark D]
> Ok, this makes sense, you are also looking at this from the viewpoint of using
> the DPDK generic filter API for other potential usecases. That’s good, it 
> would
> be good if more people from the ovs community would look at it aswell. The
> API seems like a good idea for DPDK and if it has a software implementation
> and broad NIC pmd support, it could be useful.
> 
> > >
> > > >
> > > > I think the VXLAN acceleration was a good use case since the
> > > > vswitch is the owner of the tunnel endpoint and therefore is
> > > > better equipped to make policy decisions. The main concern that I
> > > > had with the previous implementation was that it was making
> > > > assumptions about the contents of the inner flow based on the UDP
> > > > source port, which is not really safe since that is just a hash.
> > > [Gray, Mark D]
> > > I read your comments on this I had a look through Sugesh's code to
> > > try and see where this was happening. I couldn’t see it but I agree
> > > that the source port is basically random and it's only a hash of the inner
> flow by convention.
> > > Sugesh, is Jesse's concern valid in your implementation? I thought
> > > it was actually extracting the inner header and you weren't making
> > > an assumption about the source port?
> > [Sugesh] Its not possible to do the inner packet extraction and lookup
> > because the inner packet miniflow also include metadata from outer
> > header. The inner packet matches on hash + tunnel flag to match the
> > flow in the last implementation.
> > The proposed design may solve this by making  the control plane to
> > insert two set rules in the datapath, for VxLAN tunnel traffic set :1
> > (Software fallback path) Rule 1: Outer tunnel header rule Rule 2 :
> > Inner header rule.
> > Set:1 is purely software rules and that expected to be present in the
> > datapath until the Hardware flow insertion is complete.
> >
> > Set:2 (Software + Hardware path)
> > Rule 1:- Outer header hardware flow director rule, Programmed by OVS
> > control plane.
> > Rule 2:- Inner header software rule(matches on miniflow only from
> > inner header + hardware reported tunnel ID). Please note there is no
> > tunnel metadata from the outer header uses here.
> >
> > To start off,
> > OVS can verify the hardware capabilities when a user adds a port. For
> > now its only the flow director and hashing.
> > For every flow insertion request To/From the port, control plane has
> > to check if it is a candidate for hardware flow and insert the flows
> > in NICs accordingly.
> > We can assume that every hardware flow associate with an ID and
> > queue(Or either of one).
> > The flow lookup and matching logic has to be changed to handle these
> > parameters too.
> > Similarly this should be represented in the openflow as well. Can we
> > use any openflow registers for this?
> > Please let me know your thoughts on this.
> >
> [Gray, Mark D]
> From your description here, I don't think you are making an assumption
> about the inner flow based on the source port as Jesse suggested.
> 
> The first match is a hardware match on the outer header which generates a
> unique ID. This ID is unique for each tunnel endpoint. The second match,
> matches on the inner header + the unique ID from the first match. This
> combination of the unique ID + inner header should be unique for that flow
> and can correctly identify flows of that type?
[Sugesh] : But still its not as unique as matching on packet fields per se.
> 
> >
> > > >
> > > > Providing hashes or other flow lookup hints that software can use
> > > > to accelerate lookups while still actually doing the forwarding
> > > > itself are also good examples of relatively simpler offloads
> > > > because there is no danger of violating rules. If a flow can't be
> > > > offloaded to a hardware flow table then the worst that happens is
> > > > performance suffers vs. possibly picking a (wrong) lower priority flow.

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev

Reply via email to