Hey everyone - we've been doing some OpenStack deployments on different datacenter architectures in our lab. Many of the architectures we work with divide the management plane from the data plane by VRF, i.e. there is no routing between tenant VM traffic and the management interfaces of the datacenter infrastructure. Applying this same model to OpenStack, we put the OpenStack services onto the management network. Tenant users are given access to the management VRF so they can use the APIs, while tenant VM traffic is kept in the data VRF. That works pretty well, with a couple of exceptions.
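For anyone unfamiliar with the model: on a Linux host this kind of management/data split can be sketched with VRF-lite devices. This is just an illustration, not our actual config - the interface names, table IDs, and VRF names here are all made up:

```shell
# Hypothetical VRF-lite split on a node (names/tables are assumptions)
ip link add mgmt-vrf type vrf table 10      # management VRF
ip link set mgmt-vrf up
ip link set eth0 master mgmt-vrf            # mgmt interface -> mgmt VRF

ip link add data-vrf type vrf table 20      # tenant data VRF
ip link set data-vrf up
ip link set eth1 master data-vrf            # tenant-facing interface -> data VRF
```

With no routes leaked between tables 10 and 20, nothing in the data VRF can reach the management interfaces, which is the point of the design - and also the source of the problems below.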
So the first exception we noticed was the nova metadata service. Here, the tenant VM needs access to an API on the management VRF. You can solve this via config drive, or through neutron's metadata proxy. Not too bad.

The second exception we hit with this policy is Swift. The Swift APIs actually need to be accessed by tenant VMs. Not a big deal by itself - just put the Swift servers into the data VRF. But if you want to use Ceph, and have Ceph be your storage backend for Swift, Cinder, and Glance, now you've got a problem, b/c you need access to the cluster from both VRFs.

I'm just working through this stuff in my lab, so I'm hoping to get some feedback from the real world. Has anyone set up their OpenStack cluster with the management and data planes segmented by VRF? Did you run into this, or any other case of needing traffic to cross between VRFs? If so, how did you work around it? Dual-homed servers? A fusion router? Something more elegant?

Thx,
britt
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
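(P.S. for anyone hitting the metadata issue above: the two workarounds boil down to a few config lines. A sketch only, assuming stock nova and neutron configs - the hostname and secret are placeholders:)

```ini
# Option 1: config drive -- nova.conf on compute nodes
[DEFAULT]
force_config_drive = True

# Option 2: neutron metadata proxy -- nova.conf
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = SECRET

# ...and the matching metadata_agent.ini on the network node
[DEFAULT]
nova_metadata_host = controller.mgmt.example.com  # assumed hostname
metadata_proxy_shared_secret = SECRET
```

With option 2, the VM talks to 169.254.169.254 inside the data VRF and the agent relays the request to nova on the management side, so no route leaking is needed for metadata.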