Well, one NIC is already a limitation, and I would encourage you to use VLANs on top of a NIC or a bonding interface, like I did (a rough sketch of the bonded variant is at the end of this mail):
=== ifcfg-eth0 ===
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
BRIDGE=cloudbr0

=== ifcfg-eth0.2578 ===
DEVICE=eth0.2578
VLAN=yes
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
BRIDGE=cloud-storage

=== ifcfg-eth0.2610 ===
DEVICE=eth0.2610
VLAN=yes
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
BRIDGE=cloud-mgmt

=== ifcfg-cloudbr0 ===
DEVICE=cloudbr0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
IPADDR=<hypervisor management IP>
NETMASK=xxx
GATEWAY=xxx
DNS1=xxx
DNS2=xxx
DOMAIN=xxx

=== ifcfg-cloud-mgmt ===
DEVICE=cloud-mgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
DELAY=0

=== ifcfg-cloud-storage ===
DEVICE=cloud-storage
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
DELAY=0

eth0.2578 and eth0.2610 are the VLAN interfaces (VLAN IDs 2578 and 2610) I created for the storage and management networks only. For the guest network I just specified cloudbr0 as the KVM traffic label and put VLANs on top of it, with the VLAN IDs defined in the CS guest networks. CS will create the additional bridged ifcfg-cloudVirBR<VLANID> automatically.

Thanks,
Bjoern
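If you bond two NICs first, the layout stays the same and only the uplink changes from eth0 to the bond. A rough sketch, assuming a hypothetical bond0 over eth0/eth1 in active-backup mode with the same VLAN IDs (the bridge files stay exactly as above):

=== ifcfg-eth0 ===
# slave of bond0 (ifcfg-eth1 looks the same with DEVICE=eth1)
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

=== ifcfg-bond0 ===
# the bond carries the untagged traffic into cloudbr0
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
ONBOOT=yes
BRIDGE=cloudbr0

=== ifcfg-bond0.2578 ===
# storage VLAN on top of the bond
DEVICE=bond0.2578
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
BRIDGE=cloud-storage

=== ifcfg-bond0.2610 ===
# management VLAN on top of the bond
DEVICE=bond0.2610
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
BRIDGE=cloud-mgmt

Adjust BONDING_OPTS (mode, miimon) and the interface names to your switches and hardware; everything downstream of the bridges is unchanged.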
