Hey Sagar,

It does sound like you need to use eth1 for FLAT_INTERFACE. Note, though, that this parameter is only used when your network is first created with nova-manage, so if you change it you will need to re-run stack.sh (updating your nova.conf alone isn't enough). In this case it's probably best to do that after a reboot, so your network is reset to its original state.
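For reference, here's a minimal sketch of what that localrc change might look like on the compute node (interface names and the fixed range are the ones from this thread; adjust to your hosts):

```shell
# localrc on the compute node - a sketch, assuming eth1 is on the
# private/instance network and eth0 faces the public network.
FLAT_INTERFACE=eth1
PUBLIC_INTERFACE=eth0
FIXED_RANGE=192.168.3.0/24

# FLAT_INTERFACE is only read when nova-manage first creates the
# network, so after changing it, re-run stack.sh (editing nova.conf
# by hand is not enough):
#   ./stack.sh
```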
My guess is that your instance network is working on the first server because that server runs nova-network and is therefore the gateway machine for the instances. Instances on that host don't have to go out on the wire to get DHCP addresses, reach the gateway, etc. It seems likely that nova-network assigned the gateway address 192.168.3.1 to br100 on your master node and then attached eth0 to br100. On the second host, nova probably created br100 for the instances and then added the eth0 flat_interface to it, but that eth0 isn't connected to any network. So there doesn't seem to be any path for instances on the second host to reach their gateway, which would explain their lack of connectivity. Plugging the instance bridges into eth1 instead (by setting it as FLAT_INTERFACE) looks like it should allow the instance-to-gateway communication.

Some other things to check while you are working through this:

* Are your instances getting IP addresses?
* Can your instances ping their gateway, 192.168.3.1?
* Can you ping the instances from the master/network host?

tcpdump is very useful here, since it lets you see where the ping/DHCP traffic is breaking down.

PUBLIC_INTERFACE=eth0 seems right, but it won't matter until you start playing with floating addresses. You shouldn't need VLAN_INTERFACE, but it doesn't hurt that it is there.

Anthony

On Thu, Dec 29, 2011 at 12:00 AM, Frost Dragon <frostdragon...@gmail.com> wrote:

> Hi,
>
> 192.168.3.0/24 is the fixed_range in my localrc for both nodes. My
> second node doesn't have direct access to the public network. It has only
> one interface (eth1) connected to the private network. It has an IP of
> 192.168.2.2. Is there a way to set this node up as a compute node without
> connecting eth0 to the public network? I read that deploying OpenStack is
> possible without 2 NICs. Also, would putting my VMs in my manage network
> help? The VMs on my master node work fine; it's only on the compute
> nodes that I'm having issues.
> I was thinking that it had something to do
> with all my nova.conf parameters pointing to an unused eth0 interface on my
> compute node (please see my previous posts for more details on my network
> setup).
>
> Thanks and regards,
> Sagar
>
>> Is each machine configured with an address in the 192.168.3.0/24 range,
>> and is that network otherwise configured? It sounds like you have not
>> fully configured the instance network. Typically for deployments with
>> FlatDHCP there are at least 3 networks - one public, one management, and
>> one for instances. Each of these is usually on its own physical network
>> interface or VLAN.
>>
>> If this is just for testing, you can also try putting your VMs inside
>> your manage-net (say, the upper part of 192.168.2.128/25), which can be
>> convenient for testing but is not generally a good idea.
>>
>> Anthony
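To make the connectivity checks above concrete, these are the kinds of commands you could run (a sketch only; the bridge and interface names are the ones guessed at in this thread, the instance IP is hypothetical, and tcpdump needs root on a live devstack host):

```shell
# Confirm which physical interface br100 is attached to -
# on the broken compute node this is presumably eth0.
brctl show br100

# Watch DHCP traffic on the instance bridge to see whether
# requests from the VMs are ever answered.
sudo tcpdump -ni br100 port 67 or port 68

# From inside an instance: can it reach its gateway?
ping -c 3 192.168.3.1

# From the master/network host: can you reach the instance?
# (replace 192.168.3.2 with the fixed IP nova actually assigned)
ping -c 3 192.168.3.2
```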
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp