Hi Chris,

On 17 Mar. 2017 15:24, "Chris Friesen" <chris.frie...@windriver.com> wrote:

> On 03/16/2017 07:06 PM, Blair Bethwaite wrote:
>> Statement: breaks bin packing / have to match flavor dimensions to
>> hardware dimensions.
>> Comment: neither of these rings true to me given that most operators
>> tend to agree that memory is the first constraining resource dimension
>> and it is difficult to achieve high CPU utilisation before memory is
>> exhausted. Plus virtualisation is inherently about resource sharing and
>> over-provisioning,

Ah whoops, that was meant to be: "resource sharing and (optionally)
over-provisioning", where I was broadly including security and convenience
under resource sharing. There are of course many other factors.

>> unless you have very detailed knowledge of your workloads a priori (or
>> some cycle-stealing/back-filling mechanism) you will always have
>> under-utilisation (possibly quite high on average) in some resource
>> dimensions.
>
> I think this would be highly dependent on the workload. A virtual router
> is going to run out of CPU/network bandwidth far before memory is
> exhausted.

Absolutely, which is why I hinted at today's general IaaS workload and
said, "...unless you have very detailed knowledge of your workloads a
priori". NFV-focused clouds would clearly look quite different, and I
suppose with the rise of OpenStack at telcos there would be quite a few
such deployments floating around now.

> For similar reasons I'd disagree that virtualization is inherently about
> over-provisioning and suggest that (in some cases at least) it's more
> about flexibility over time. Our customers generally care about
> maximizing performance and so nothing is over-provisioned... disk, NICs,
> CPUs, RAM are generally all exclusively allocated.

Sure, though in our three Nova Cells we have a big mix of workloads. One
Cell is HPC oriented and so has no over-provisioning. Another is
performance oriented (fast cores, fast network) but still moderately
over-provisioned (using cgroups to manage resource share between flavors).
The other is general purpose.
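For anyone wanting to replicate that cgroups arrangement: one way to do it
(and roughly what I mean above) is via Nova's quota:cpu_shares flavor extra
spec, which the libvirt driver maps onto the instance's cgroup CPU weight.
The flavor names and share values below are illustrative only:

```shell
# Weight CPU time between flavors on contended hosts. The libvirt default
# per-instance cpu.shares is 1024, so "perf" instances here get roughly 4x
# the CPU weight of "standard" instances when cores are oversubscribed.
# Flavor names and values are made up for illustration.
openstack flavor set perf.large --property quota:cpu_shares=2048
openstack flavor set standard.large --property quota:cpu_shares=512
```

Worth noting this only bites under contention; with idle neighbours an
instance can still burst to full speed.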
For me an interesting question to know the answer to here would be at what point you have to stop resource sharing to guarantee your performance promises/SLAs (disregarding memory over-provisioning). My gut says that unless you are also doing all the other high-end performance tuning (CPU & memory pinning, NUMA topology, hugepages, optimised networking such as macvtap or SRIOV, plus all the regular host-side system/BIOS and power settings) you'll see very little benefit, i.e., under-provisioning on its own is not a performance win. Cheers, Blair
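For concreteness, most of that guest-side tuning list is expressible today
as Nova flavor extra specs; a sketch of the relevant properties (the flavor
name is made up) would look something like:

```shell
# Dedicated (pinned) vCPUs, a single guest NUMA node, and hugepage-backed
# guest RAM: the standard Nova extra specs for the tuning discussed above.
# Host-side BIOS/power settings and the networking choice (macvtap/SR-IOV)
# are configured elsewhere, not via the flavor.
openstack flavor set hpc.pinned \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=1 \
  --property hw:mem_page_size=large
```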
_______________________________________________ OpenStack-operators mailing list OpenStack-operators@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators