Thanks, James, for the explanation. It makes more sense now. Is it possible 
that instances on the same tenant network reside on different compute nodes? 
If so, how do I tell which compute node an instance is on?
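(For anyone following the archive: as an admin, the host is visible in the 
instance details. A sketch assuming the 2015-era nova CLI; "my-instance" is a 
hypothetical name, not one from this thread:

```shell
# Admin credentials required; 'my-instance' is a placeholder name.
nova show my-instance | grep OS-EXT-SRV-ATTR:host
# The OS-EXT-SRV-ATTR:host field names the compute node hosting the instance.

# The reverse view also exists: list instances on a given hypervisor.
nova hypervisor-servers compute1
```
)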

Thanks,
Yang

On Jun 24, 2015, at 10:27 AM, James Denton 
<james.den...@rackspace.com> wrote:

Hello.

All three nodes will have eth0 on the management/API network. Since I am using 
the ML2 plugin with VLAN for tenant networks, I think all compute nodes should 
have eth1 as the second NIC on the provider network. Is this correct? I 
understand the provider network is for instances to get external access to the 
internet, but how does an instance living on compute1 communicate with an 
instance living on compute2? Do they also go through the provider network?

In short, yes. If you’re connecting instances to vlan “provider” networks, 
traffic between instances on different compute nodes will traverse the 
“provider bridge”, get tagged out eth1, and hit the physical switching fabric. 
Your external gateway device could also sit in that vlan, and the default route 
on the instance would direct external traffic to that device.
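As an illustration of that setup, the external network in that VLAN could be 
created with explicit provider attributes. A sketch assuming the 2015-era 
neutron CLI; "physnet1" and VLAN 100 are placeholders, not values from this 
thread:

```shell
# 'physnet1' and VLAN 100 are hypothetical placeholders; 'physnet1' must
# match a physical network label defined in the ML2 configuration.
neutron net-create external-net \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 100 \
    --router:external
```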

In reality, every network has ‘provider’ attributes that describe the network 
type, segmentation id, and bridge interface (for vlan/flat only). So tenant 
networks that leverage vlans would have provider attributes set by Neutron 
automatically based on the configuration set in the ML2 config file. If you use 
Neutron routers that connect to both ‘tenant’ vlan-based networks and external 
‘provider’ networks, all of that traffic could traverse the same provider 
bridge on the controller/network node, but would be tagged accordingly based on 
the network (ie. vlan 100 for external network, vlan 200 for tenant network).
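The ML2 configuration James describes might look like the following fragment; 
the physical network label "physnet1", the bridge name, and the VLAN range are 
assumptions for illustration, not values from this thread:

```ini
# ml2_conf.ini (sketch) -- labels and ranges are hypothetical
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
; physnet1 is the label for the fabric reached via eth1 on each node
network_vlan_ranges = physnet1:200:299

[ovs]
; map the label to the OVS bridge that enslaves eth1
bridge_mappings = physnet1:br-eth1
```

With this in place, Neutron automatically picks a segmentation ID from the 
200-299 range for each new tenant network, which is how the provider attributes 
get set without the user specifying them.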

Hope that’s not too confusing!

James

On Jun 24, 2015, at 8:54 AM, YANG LI 
<yan...@clemson.edu> wrote:

I am working on installing OpenStack from scratch, but I'm getting confused by 
the networking part. I want to have one controller node and two compute nodes.

The controller node will only handle the following services:
glance-api
glance-registry
keystone
nova-api
nova-cert
nova-conductor
nova-consoleauth
nova-novncproxy
nova-scheduler
qpid
mysql
neutron-server

The compute nodes will have the following services:
neutron-dhcp-agent
neutron-l3-agent
neutron-metadata-agent
neutron-openvswitch-agent
neutron-ovs-cleanup
openvswitch
nova-compute

All three nodes will have eth0 on the management/API network. Since I am using 
the ML2 plugin with VLAN for tenant networks, I think all compute nodes should 
have eth1 as the second NIC on the provider network. Is this correct? I 
understand the provider network is for instances to get external access to the 
internet, but how does an instance living on compute1 communicate with an 
instance living on compute2? Do they also go through the provider network?
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

