> 
> > This is useful and I think we would like to use it here. I have a few
> > questions.
> 
> Great!  I really hope this moves the community in the direction of more
> realistic test cases.
> 
> > How come the pipeline is set up with metadata instead of a chain of
> > tables as I presume that would also work?
> 
> I would expect both approaches to produce the same megaflows and
> perform the same in the slow path, so it shouldn't really matter either way.
> That said, there are some minor aesthetic reasons I chose to go with
> metadata.  (1) There are only 256 tables, which sounds like a lot, but when
> you start stringing simulated middleboxes together in a service chain, it's
> easy to run out.  (2) Our internal NSX controller uses metadata instead of
> multiple tables, so I'm used to it.
> 
> At any rate, it doesn't matter much; if you feel strongly about it, feel free
> to change it.

I don't, but I just wanted to see if I was missing some key point. I think the
key point you mention is that the purpose is to produce the same megaflows, so
it doesn't really matter what the OpenFlow pipeline looks like.
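
Just to spell out what I mean, here's a rough, untested sketch of the two
styles as I understand them (assuming a hypothetical bridge "br0" and an
output port 2); I'd expect either variant to compile down to the same
megaflows:

#!/usr/bin/env python
# Rough, untested sketch.  Assumes a bridge named "br0" with an output port 2;
# both names are made up for illustration.
import subprocess

BRIDGE = "br0"

# Style 1: one logical stage per OpenFlow table, chained with resubmit.
TABLE_CHAIN_FLOWS = [
    "table=0,priority=100,ip,nw_dst=10.0.0.0/24,actions=resubmit(,1)",
    "table=1,priority=100,ip,actions=output:2",
]

# Style 2: a single table, with the pipeline stage encoded in the metadata
# field (a register such as reg0 would work just as well).  Setting metadata
# with set_field needs OpenFlow 1.2+, hence -O OpenFlow13 below.
METADATA_FLOWS = [
    "table=0,priority=100,metadata=0,ip,nw_dst=10.0.0.0/24,"
    "actions=set_field:0x1->metadata,resubmit(,0)",
    "table=0,priority=100,metadata=0x1,ip,actions=output:2",
]

def add_flows(flows, protocol=None):
    """Install a list of flows on BRIDGE with ovs-ofctl."""
    for flow in flows:
        cmd = ["ovs-ofctl"]
        if protocol:
            cmd += ["-O", protocol]
        subprocess.check_call(cmd + ["add-flow", BRIDGE, flow])

if __name__ == "__main__":
    add_flows(TABLE_CHAIN_FLOWS)
    # Or, equivalently (install one style or the other, not both):
    # add_flows(METADATA_FLOWS, protocol="OpenFlow13")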

> 
> > There are a lot of destination IP addresses. What is the use case this is
> > simulating?  I can't imagine there would be that many MAC addresses on the
> > hypervisor unless it was hosting containers?
> 
> When I wrote this test case, I was imagining OVS being used as more of a
> firewall/router in the core of the datacenter, rather than as just the
> hypervisor vswitch.  We commonly use OVS as a gateway between an overlay
> network using network virtualization and the WAN.  In this case OVS is
> responsible for routing over many, many IP addresses (typically one per VM in
> the datacenter).

OK, your use case is clear to me now.

> 
> That said, for the typical NFV case, perhaps it's a bit overkill.  We can
> always add more test cases if you'd like.  Then again, I suspect most NFV
> workloads would benefit greatly if basic routing/firewalling were offloaded
> to OVS rather than done in VMs.
> 
> > You talk about network virtualization workloads, but it doesn't do any
> > matching on, for example, VLAN VID or VXLAN tunnel ID?
> 
> Yep, it would make a lot of sense to add in tunneling support.  I don't
> suspect it would matter much, but hard to say without testing it =).
> 
> To be super clear, I don't consider this script the definitive best-case
> benchmark we could do.  I just wanted to take a step away from trivial
> port-to-port forwarding tests.  I.e., I'm not claiming this is realistic,
> just more realistic than what we were doing before.  I really hope that as
> a community we can get behind it and evolve it into something much more
> realistic as time goes on.

I got you. I guess an OpenStack deployment would be quite typical as well. We've
looked at this in some detail to understand what kind of megaflows get
installed. It's quite diverse depending on your setup and is difficult to
understand until you actually run it, due to the use of the NORMAL action and a
number of bridges.
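
For anyone who wants to get a similar picture of their own setup, a rough,
untested sketch along these lines (just parsing "ovs-dpctl dump-flows" output)
is enough to summarize which fields the installed megaflows match on:

#!/usr/bin/env python
# Rough, untested sketch: summarize which fields the installed datapath
# megaflows match on, based on "ovs-dpctl dump-flows" output.
import collections
import subprocess

def top_level_keys(match):
    """Return the top-level field names of a datapath match string such as
    "recirc_id(0),in_port(2),eth(...),eth_type(0x0800),ipv4(...)"."""
    keys, depth, token = [], 0, ""
    for ch in match:
        if ch == "(":
            if depth == 0 and token:
                keys.append(token)
                token = ""
            depth += 1
        elif ch == ")":
            depth -= 1
        elif depth == 0:
            token = "" if ch == "," else token + ch
    return keys

def summarize():
    out = subprocess.check_output(["ovs-dpctl", "dump-flows"]).decode()
    counts = collections.Counter()
    nflows = 0
    for line in out.splitlines():
        if ", packets:" not in line:
            continue
        nflows += 1
        counts.update(top_level_keys(line.split(", packets:")[0].strip()))
    print("%d megaflows installed" % nflows)
    for field, count in counts.most_common():
        print("  %-12s %d" % (field, count))

if __name__ == "__main__":
    summarize()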

> 
> > This would be quite necessary! Can you give an idea what kind of flows we
> > should be generating to test it, or is it just random IP addresses, MAC
> > addresses, and L4 ports?
> 
> So for now I've been running a couple of packet traces I've collected
> through it, which, unfortunately, I can't share publicly.  I had been
> planning to write a script which can generate "realistic" packet traces for
> doing this sort of testing, based on the types of things I've seen in the
> traces I've collected, but I haven't quite gotten to it yet.

I look forward to seeing this, as it will make the test a lot easier to use.
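
In the meantime, I guess even something as crude as the following untested
scapy sketch (random MACs, IPs, and L4 ports, written out to a hypothetical
"synthetic.pcap") would already be a step up from plain port-to-port tests:

#!/usr/bin/env python
# Rough, untested sketch: write a pcap of random TCP flows (random MACs, IPs
# and L4 ports).  Real traces would obviously have far more structure (packet
# sizes, flow lengths, protocol mix, and so on).
import random
from scapy.all import Ether, IP, TCP, wrpcap

N_FLOWS = 1000
PKTS_PER_FLOW = 10

def rand_mac():
    # The 02: prefix keeps the locally administered bit set.
    return "02:%02x:%02x:%02x:%02x:%02x" % tuple(
        random.randint(0, 255) for _ in range(5))

def rand_ip():
    return "10.%d.%d.%d" % (random.randint(0, 255), random.randint(0, 255),
                            random.randint(1, 254))

packets = []
for _ in range(N_FLOWS):
    pkt = (Ether(src=rand_mac(), dst=rand_mac()) /
           IP(src=rand_ip(), dst=rand_ip()) /
           TCP(sport=random.randint(1024, 65535),
               dport=random.choice([80, 443, 8080, 3306])))
    packets.extend([pkt] * PKTS_PER_FLOW)

random.shuffle(packets)
wrpcap("synthetic.pcap", packets)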

> 
> Ethan
