Another factor to consider as you look at your networking options is
visibility into network traffic.

Open vSwitch includes embedded traffic monitoring and would give you
visibility into all traffic between VMs and from VMs to the outside world:

http://blog.sflow.com/2010/05/configuring-open-vswitch.html
http://blog.sflow.com/2010/01/open-vswitch.html

Traffic visibility simplifies troubleshooting and capacity planning, as
well as providing a way to charge for network bandwidth.
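For reference, enabling sFlow on an OVS bridge is a one-liner with
ovs-vsctl. A minimal sketch, where the collector address (10.0.0.50:6343),
agent interface (eth0), and bridge name (br0) are placeholders for your
environment:

    # Point a new sFlow record at the collector and attach it to br0.
    ovs-vsctl -- --id=@s create sflow agent=eth0 \
        target=\"10.0.0.50:6343\" header=128 sampling=64 polling=10 \
        -- set bridge br0 sflow=@s

    # To disable monitoring again:
    ovs-vsctl clear bridge br0 sflow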
Peter

On May 21, 2010, at 2:24 AM, DarkBls wrote:

> I would be glad to.
>
> Here is what we are trying to do:
>
> We have several blade centers with 14 blades (servers) in each one.
> We use Fedora / CentOS with KVM on each blade.
>
> Each blade (2 x 4 cores) can host up to 24 VMs. So with 3 full blade
> centers (BC) we would have something like a thousand VMs.
>
> We want to manage virtual platforms defined like this; a platform is:
>
> * Several VMs spread among several physical servers
> * At least 4 VLANs (admin, data, applications, cluster, ...)
>
> The hypervisors (blades) are all connected together by an admin VLAN
> not connected to the VMs.
>
> The network is built with internal blade switches (BNT or Cisco) and
> sometimes a federated stack of switches (3Com / H3C).
>
> Now we face the problem of network virtualisation.
>
> KVM provides a TAP interface for each interface inside the VM.
>
> We have several ideas.
>
> Let's take a platform with VLANs 10 / 20 / 30 / 40.
>
> 1) Linux Bridges
>
> Make a VLAN interface on the hypervisor for each VLAN (e.g. bond0.10,
> bond0.20, bond0.30, bond0.40).
> Make a br10, br20, br30, br40 and construct the couples: tap001 /
> bond0.10, tap002 / bond0.20, and so on.
>
> PRO:
>
> Everything is included in the Linux distro. A mature and rock-solid (?)
> way to do it.
>
> CONS:
>
> Not flexible. The VM creation script adds the TAP into the bridge, but
> in case of migration to another server a lot of work must be done.
> Not hardware agnostic. Each VLAN must be declared and properly
> configured on the BC switch module and the federated switch.
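A minimal sketch of what option 1 looks like for VLAN 10, using the stock
Linux tools (bond0, br10, and tap001 are the names from the example above;
the same pattern repeats for VLANs 20, 30, and 40 on every hypervisor):

    # VLAN subinterface on the bond.
    vconfig add bond0 10
    ip link set bond0.10 up

    # Per-VLAN bridge: enslave the VLAN interface and the VM's TAP device.
    brctl addbr br10
    brctl addif br10 bond0.10
    brctl addif br10 tap001
    ip link set br10 up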
> 2) BNT VMready feature
>
> We are testing this feature. Basically, the MAC address of each virtual
> interface in the VM is declared/detected by the switch, and a virtual
> port configuration makes migration between blades possible.
>
> PRO / CONS: Still at an early phase of testing.
>
> 3) 802.1ad / QinQ
>
> Same as 1), but the hardware switch adds a second VLAN tag to each
> frame, to be able to use a transport backbone VLAN among all servers.
>
> PRO:
>
> Seems fast because of hardware support.
>
> CONS:
>
> Still using Linux bridges (and still have to manage them).
> Not a well-defined feature among hardware manufacturers (BNT doesn't
> support it; Cisco / 3Com do).
>
> 4) OvS with GRE
>
> OvS deployed on each server (blade) and connected to a remote
> configuration DB. Each OvS "switch" cascades to another one through a
> GRE tunnel (see the sketch near the end of this message).
>
> PRO:
>
> Nothing to do on the hardware when adding / removing a virtual platform.
> Can migrate a full switch with its VMs to another server without any
> reconfiguration, as long as the IP used for GRE doesn't change.
>
> CONS:
>
> Cascading, not stacking. Beware of loops (since there is no STP on OvS)
> and of single points of failure in the topology.
> Scalability / performance?
> Would have preferred a GRE stack instead of cascading, to be able to
> construct one (or several) virtual switches across the hardware.
>
> 5) Multicast networking with KVM's native feature
>
> Each VLAN is on a different multicast address (see the sketch near the
> end of this message).
>
> PRO:
>
> Native feature.
> VERY flexible.
> Hardware agnostic.
>
> CONS:
>
> Hard to add a physical server to a multicast virtual VLAN.
>
> Would have loved to have OvS stacked switches with multicast :D
>
> 6) GVRP
>
> Use a GVRP client on each hypervisor to configure the VLANs on each
> piece of hardware.
>
> CONS:
>
> Bad support among hardware manufacturers.
>
>
> No silver bullet so far.
>
> Since our projects are getting bigger every day (we are talking about
> several hundred euros of hardware, and counting), I'm interested in your
> point of view on this.
>
> I have some drawings if you want them.
>
> Cheers.
>
>
>
> ----- Original Message ----
> From: Justin Pettit <jpet...@nicira.com>
> To: DarkBls <dark...@yahoo.com>
> Sent: Fri, May 21, 2010, 9:45:42 AM
> Subject: Re: Re : Re : [ovs-discuss] OvS 1.0.0 Compile error on fedora 13
>
> Interesting. We're always curious how OVS is being used in the real
> world. Are you comfortable talking about what you're working on? We are
> always happy to hear about our software being used in large deployments.
>
> --Justin
>
>
> On May 21, 2010, at 12:11 AM, DarkBls wrote:
>
>> I will keep you informed for sure. I'm working on a big project
>> (several hundred VMs to manage) and I need something robust, flexible,
>> and fast for network virtualisation.
>>
>>
>>
>> ----- Original Message ----
>> From: Justin Pettit <jpet...@nicira.com>
>> To: DarkBls <dark...@yahoo.com>
>> Cc: Ben Pfaff <b...@nicira.com>; discuss@openvswitch.org
>> Sent: Fri, May 21, 2010, 9:08:43 AM
>> Subject: Re: Re : [ovs-discuss] OvS 1.0.0 Compile error on fedora 13
>>
>> On May 20, 2010, at 11:56 PM, DarkBls wrote:
>>
>>> I confirm it compiles with
>>> <sys/stat.h>
>>
>> Great. I pushed the fix earlier today, so it will be fixed in the next
>> release. Thanks for reporting it.
>>
>>> Now I just have to understand how to tackle the rest of the
>>> configuration traps :D (DB creation, GRE switch virtual cascading ...)
>>
>> Good luck. Let us know if you get really stuck. The DB takes a little
>> while to get used to, but I think you will find it more flexible. I use
>> "ovs-vsctl list <table>" a lot to get an understanding of how things
>> are actually set up. We plan to add better tools for dumping the
>> current configuration of the system, since it requires a pretty
>> low-level understanding of the system now.
>>
>> --Justin
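Referring back to option 4 (OvS with GRE): a minimal sketch of cascading
two OvS instances over a GRE tunnel, where br0 is the bridge on each blade
and 192.168.0.11 / 192.168.0.12 are hypothetical hypervisor addresses on
the admin network:

    # On blade A (192.168.0.11): bridge, the VM's TAP, and a GRE port to B.
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 tap001
    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
        options:remote_ip=192.168.0.12

    # On blade B (192.168.0.12): the mirror image, tunneling back to A.
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 tap002
    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
        options:remote_ip=192.168.0.11

As the CONS note says, with more than two blades this builds a chain of
tunnels, so the topology must be kept loop-free by hand (there is no STP).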
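And on option 5 (KVM's native multicast networking): each virtual LAN maps
onto a multicast group, with no bridge on the host at all. A minimal sketch
with qemu-kvm, where 230.0.0.10:1234 stands in for VLAN 10's group and the
MAC address and image name are examples:

    # VM with one NIC on the "VLAN 10" multicast segment.
    qemu-kvm -m 512 -drive file=vm001.img \
        -net nic,macaddr=52:54:00:00:10:01 \
        -net socket,mcast=230.0.0.10:1234

A second NIC on VLAN 20 would add another -net nic / -net socket pair with
matching vlan= tags (e.g. vlan=1) so QEMU knows which NIC goes with which
socket, plus mcast=230.0.0.20:1234.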