Well, it's quite difficult to debug your issue remotely.
I can only tell you what has helped me in such cases:

1. Get the vnet interfaces from your VM (virsh dumpxml s-xx-VM).
2. Get the bridge info using brctl show.
3. Log into the system VM from the hypervisor as root using ssh -i
   .ssh/id_rsa.cloud [email protected] -p 3922 (the IP is the link-local IP).
4. Check from each interface whether you can ping the gateways (a minimal
   command sketch follows below).
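
Put together, a minimal sketch of that sequence looks like this (s-1-VM and
<link-local-ip> are placeholders; substitute your own system VM name and the
link-local IP shown for it in the CloudStack UI):

    # on the KVM host
    virsh dumpxml s-1-VM | grep -A 3 'interface type'  # vnet interfaces and their bridges
    brctl show                                         # which vnet sits on which bridge
    ssh -i ~/.ssh/id_rsa.cloud -p 3922 root@<link-local-ip>

    # inside the system VM
    ip addr                       # list the interfaces (e.g. eth0/eth1/eth2) and their IPs
    ping -c 3 <gateway-ip>        # repeat for each interface's gateway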

Bye,
Bjoern

On 04/03/2013 03:14 AM, Valery Fongang wrote:
Hi,

I have the following setup:

- 1 physical host with a single NIC, running CentOS 6.3 + KVM
- 1 virtual Cloud Management Server on a physical host separate from my KVM host
- Both physical servers plug into the same switch, a NETGEAR GS748T
- My KVM host is connected on port 1, where I have an untagged VLAN trunk.
- My guest network VLAN range is set to 200-300.
- System VMs and instances can be created with no issue. The CentOS template
has been downloaded and I am able to spin up VMs with it.

THE ISSUE:

- My system routers are not able to communicate over their public IPs.
- I am not able to ping my system router's public IP from a guest VM; however,
I am able to ping all the system VMs from guest VMs.
- My guest VMs are unable to reach the Internet.

The NETGEAR switch I have does not let me trunk a range of VLANs, but I make
sure I manually add every VLAN used by CloudStack to the switch and assign
port 1, where my host is connected, to that VLAN.
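
One quick way to confirm that a given guest VLAN actually makes it through the
switch to the host is to watch for tagged frames on the NIC (eth0 and VLAN 210
are just illustrative; pick an ID from the 200-300 range):

    tcpdump -i eth0 -e -nn vlan 210   # tagged frames for VLAN 210 should show up here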

I have played around with my traffic labels and
/etc/cloud/agent/agent.properties, but I am still unable to get this working.
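
For context, the device settings in that file take roughly this form (cloudbr0
is just an example bridge name; the actual values must match the bridges on
the host):

    # /etc/cloud/agent/agent.properties (excerpt; values are illustrative)
    public.network.device=cloudbr0
    private.network.device=cloudbr0
    guest.network.device=cloudbr0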
Any idea how to get my guest VMs to access the public network?

Thanks,


