On 04/25/2016 11:33 PM, Jaison Peter wrote:

> I have many concerns about scaling and making the right choices, since
> OpenStack offers a lot of choices and flexibility, especially on the
> networking side. Our major challenge was choosing between the simplicity
> and performance offered by Linux bridge and the features and DVR offered
> by OVS. We decided to go with OVS, though some suggested that OVS is slow
> in large deployments. The distributed L3 agents and the bandwidth offered
> by DVR inclined us towards OVS. Was it the right decision?

> But one of the major drawbacks we are seeing with DVR is public IP
> consumption. If we have 100 clients and 1 VM per client, eventually there
> will be 100 tenants and 100 routers. Since it is a public cloud, we have
> to offer a public IP for each VM. In DVR mode, the fip namespace on a
> compute node consumes one public IP, so if 100 VMs are spread across 20
> compute nodes, 20 public IPs are used on the compute nodes. In addition,
> a SNAT namespace is created for each tenant router (100 in total), and
> each of them consumes one public IP, so 100 public IPs are consumed by
> the centralized SNAT namespaces. In total, 100 + 20 = 120 public IPs are
> used by OpenStack components and another 100 are used as floating IPs
> (1:1 NAT) by the VMs. So we need 220 public IPs to provide dedicated
> public IPs for 100 VMs!! Is there anything wrong with our calculation?
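
Spelled out with the numbers from that scenario (just a back-of-the-envelope sketch; the counts are the ones quoted above):

    routers=100        # one snat- namespace address per tenant router
    computes=20        # one fip- namespace address per compute node hosting floating IPs
    floating_ips=100   # one 1:1 NAT address per VM
    echo $(( routers + computes + floating_ips ))   # 220 public IPs in total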

Have you also looked at the namespaces created on the compute nodes and the IP addresses they get?
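
If not, a quick way to see them (a sketch, assuming the usual fip-/snat-/qrouter- naming used by the DVR L3 agents; substitute the names that "ip netns list" actually shows on your nodes):

    # On a compute node: list the namespaces the L3/metadata agents created
    ip netns list

    # Addresses held by the fip namespace for a given external network
    sudo ip netns exec fip-<external-net-id> ip addr show

    # Addresses held by a centralized SNAT namespace on a network node
    sudo ip netns exec snat-<router-id> ip addr show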

> From our point of view, the 120 IPs used by OpenStack components in our
> case (where every VM gets 1:1 NAT) are a waste of IPs and play no role in
> carrying network traffic. Centralized SNAT is useful if a client opts for
> a VPC-like setup, as in AWS, and does not attach floating IPs to all of
> the instances in his VPC.

> So is there any option, when creating a DVR router, to avoid creating the
> centralized SNAT namespace on the controller node, so that we could save
> 100 public IPs in the above scenario?

That certainly would be nice to be able to do.
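
The closest knob I'm aware of is disabling SNAT when the gateway is set, rather than anything at router-create time (a sketch against the Mitaka-era neutron CLI; "router-demo" and "ext-net" are placeholder names, and whether this actually keeps the snat- namespace and its address from being allocated is worth verifying in your environment):

    # Create a distributed router (the --distributed attribute is admin-only)
    neutron router-create --distributed True router-demo

    # Attach the external network with SNAT disabled (enable_snat=False)
    neutron router-gateway-set router-demo ext-net --disable-snat

Even with SNAT disabled, the router's gateway port itself may still take an address on the external network, so count that against any expected savings.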

DVR certainly does help with instance-to-instance communication scaling - not having to go through the CVR network node(s) is a huge win for aggregate network performance. But if instances are not likely to speak much with one another via floating IPs - say because instances on different private networks aren't communicating with one another, and instances on the same private network can speak via their private IPs - then that scaling doesn't really matter. The same is true if those instances will not be communicating (much) with other services in your cloud - Swift comes to mind - or if all the instances will spend most of their network time speaking with the Big Bad Internet (tm), in which case your cloud's connection to the Big Bad Internet is what gates the aggregate. In all of those scenarios you may be just as well off with CVR.

In terms of other things relating to scale, and drifting a bit from DVR: if enable_isolated_metadata is set to true (not sure if that is the default or not), then two metadata proxies are launched for each network, in addition to the two metadata proxies launched per router and the metadata proxy launched on each compute node hosting an instance behind a given DVR router. The latter isn't that big a deal for scaling, but the former can be. Each metadata proxy will want ~40 MB of RSS, so that is 160 MB of RSS per network/router pair spread across your neutron network nodes. Added to that will be another ~10 MB of RSS for a pair of dnsmasq processes.
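
If you want to see what that adds up to on a given network node, something along these lines gives the aggregate (a sketch, assuming the proxies run as neutron-ns-metadata-proxy processes as in Mitaka; adjust the process names to match your deployment):

    # Sum the resident set size of all metadata proxies on this node, in MB
    ps -eo rss,args | grep '[n]eutron-ns-metadata-proxy' | awk '{ sum += $1 } END { printf "%.0f MB\n", sum/1024 }'

    # Same idea for the dnsmasq instances spawned by the DHCP agents
    ps -eo rss,args | grep '[d]nsmasq' | awk '{ sum += $1 } END { printf "%.0f MB\n", sum/1024 }'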

Getting back to DVR, and bringing OVS along for the ride, one other "cost" is that one must still have a Linux bridge in the path to implement the security group rules. Depending on the sort of netperf benchmark one runs, that "costs" between 5% and 45% of performance. That was measured by setting up the instances, taking a baseline, and then manually "rewiring" to bypass the Linux bridge.
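
By "sort of netperf benchmark" I mean things like the following (a sketch; the instance address is a placeholder, and netserver has to be running on the target instance):

    # Bulk-transfer throughput between two instances, 30-second run
    netperf -H 10.0.0.11 -t TCP_STREAM -l 30

    # Small-packet request/response (transaction rate) between the same pair
    netperf -H 10.0.0.11 -t TCP_RR -l 30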

happy benchmarking,

rick jones

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
