Hi Andrei,

You’re confusing the matter by masking your public IP ranges. You said you 
have “2 x Public IP ranges with /26 netmask” – but since you are masking them 
out with X’s your email doesn’t quite add up: if all the X’s are the same, then 
a .10 and a .20 IP address would be on the same /26 network.

I will assume that you do in fact have two separate /26 networks, e.g.:

192.168.0.0/26 – with default gateway 192.168.0.1
192.168.0.64/26 – with default gateway 192.168.0.65
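To make the subnet boundaries concrete, here is a quick sketch using Python’s 
stdlib ipaddress module with the example ranges above (the 192.168.0.x 
addresses are just the placeholders from this example, not your real IPs):

```python
import ipaddress

# The two assumed public /26 ranges from the example above.
net1 = ipaddress.ip_network("192.168.0.0/26")    # hosts .0 - .63
net2 = ipaddress.ip_network("192.168.0.64/26")   # hosts .64 - .127

# With identical X's, a .10 and a .20 address land in the SAME /26:
print(ipaddress.ip_address("192.168.0.10") in net1)  # True
print(ipaddress.ip_address("192.168.0.20") in net1)  # True

# ...whereas a .10 and a .70 address are on DIFFERENT /26 networks:
print(ipaddress.ip_address("192.168.0.70") in net1)  # False
print(ipaddress.ip_address("192.168.0.70") in net2)  # True
```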

If your two guest networks have VRs on separate public IP ranges you will have 
e.g.

VR1: public IP 192.168.0.10
VR2: public IP 192.168.0.70

For a VM hosted behind VR1 to reach a service NAT’ed on VR2 you need to set up 
routing and possibly firewalling on the data centre device which handles the 
default gateways for the two networks – i.e. the top-of-rack switch or router 
which hosts default gateways 192.168.0.1 and 192.168.0.65. The fact that you 
can reach services on both networks from outside these ranges makes sense: 
that traffic already passes through this device.

So once you have fixed this you will have VM1 > VR1 > DC_SWITCH_OR_ROUTER > VR2 
> VM2.
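If the device hosting the two gateways happened to be a Linux router, the 
setup could be sketched roughly as below. This is purely illustrative – the 
interface names, VLAN layout and 192.168.0.x addresses are assumptions carried 
over from the example above; on a Cisco/Juniper top-of-rack device the 
equivalent would be connected/static routes plus an ACL permitting traffic 
between the two ranges:

```shell
# Illustrative sketch only - assumes a Linux box acts as the DC router
# hosting both default gateways; interface/VLAN names are hypothetical.
sysctl -w net.ipv4.ip_forward=1                 # enable routing between subnets
ip addr add 192.168.0.1/26  dev eth0.100        # gateway for the first /26
ip addr add 192.168.0.65/26 dev eth0.200        # gateway for the second /26
# Permit forwarded traffic between the two public ranges:
iptables -A FORWARD -s 192.168.0.0/26  -d 192.168.0.64/26 -j ACCEPT
iptables -A FORWARD -s 192.168.0.64/26 -d 192.168.0.0/26  -j ACCEPT
```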


Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 21/02/2018, 12:27, "Andrei Mikhailovsky" <[email protected]> wrote:

    Hello 
    
    Could someone help me identify the routing issue we have? The problem is 
that traffic from different guest networks cannot reach each other via the 
public IPs. 
    
    Here is my ACS setup: 
    ACS 4.9.3.0 (both management and agents) 
    KVM Hypervisor based on Ubuntu 16.04 
    Ceph as primary storage. NFS as secondary storage 
    Advanced Networking with vlan separation 
    2 x Public IP ranges with /26 netmask. 
    
    
    
    Here is an example when routing DOES NOT work: 
    
    Case 1 - Advanced Networking, vlan separation, VRs route all traffic and 
provide all networking services (dhcp, fw, port forwarding, load balancing, 
etc) 
    
    Guest Network 1: 
    
    Public IP: XXX.XXX.XXX.10/26 
    Private IP range: 10.1.1.0/24 
    guest vm1 IP: 10.1.1.100/24 
    
    Guest Network 2: 
    Public IP: XXX.XXX.XXX.20/26 
    Private IP range: 10.1.1.0/24 
    guest vm2 IP: 10.1.1.200/24 
    
    
    I've created ACLs on both guest networks to allow traffic from 0.0.0.0/0 on 
port 80. I've created port forwarding rules to forward port 80 from public 
XXX.XXX.XXX.10 and XXX.XXX.XXX.20 onto 10.1.1.100 and 10.1.1.200 
respectively. 
    
    This setup works perfectly well when I am initiating the connections from 
outside of our CloudStack. However, vm2 can't reach vm1 on port 80 using the 
public IP XXX.XXX.XXX.10 and vice versa, vm1 can't reach vm2 on public IP 
XXX.XXX.XXX.20. 
    
    
    
    
    Here is an example when the routing DOES work: 
    
    Case 2 - Advanced Networking, vlan separation, VRs are not used. Public IPs 
are given directly to a guest vm 
    
    Guest Network 1: 
    
    guest vm1 Public IP: XXX.XXX.XXX.100/26 
    
    Guest Network 2: 
    
    guest vm2 Public IP: XXX.XXX.XXX.110/26 
    
    In the Case 2, the guest vm has a public IP address directly assigned to 
its network interface. VRs are not used for this networking. Each guest has a 
fw rule to allow incoming traffic on port 80 from 0.0.0.0/0. Both vm1 and vm2 
can access each other on port 80. Also, vms from Case 1 above can access port 
80 on vms from Case 2, similarly, vms from Case 2 can access port 80 on vms 
from Case 1. 
    
    
    
    So, it seems that the rules on the VR in Case 1 do not allow traffic that 
originates from other VRs within the same public network range. A traceroute 
shows the last hop as the VR's private IP address. How do I change that 
behaviour and fix the networking issue? 
    
    Thanks 
    
    Andrei 
    


[email protected] 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 

Reply via email to