Hi Bharat,

Regarding the case of contention, you mentioned that hypervisor-specific methods would come into effect. I have a small query about this.
Imagine I place enough VMs on a host that the RAM allocated to them exceeds the physical RAM present in the host. As per your explanation, the hypervisor will then start memory optimization and reclamation techniques, but in the end it will swap host memory to disk. Isn't that a bad thing? Wouldn't it cause performance issues?

Thanks,
Pranav

-----Original Message-----
From: Bharat Kumar [mailto:bharat.ku...@citrix.com]
Sent: Wednesday, December 26, 2012 6:09 PM
To: cloudstack-dev@incubator.apache.org; cloudstack-us...@incubator.apache.org
Subject: Re: [Discuss] Cpu and Ram overcommit.

Nitin, thanks for your suggestions. My comments are inline.

On Dec 26, 2012, at 3:22 PM, Nitin Mehta <nitin.me...@citrix.com> wrote:

> Thanks Bharat for bringing this up.
> I have a few questions and suggestions for you.
>
> 1. Why do we need this on a per-cluster basis, and when and where do you configure it? I hope that changing it for a cluster would not require an MS reboot and would be picked up dynamically - is that the case?

The admin needs to adjust the overcommit factor depending on the applications running in a given cluster. For example, if the applications running in a cluster are RAM-intensive, he may want to decrease the RAM overcommit ratio for that cluster without affecting the other clusters. This is possible only if the ratios can be specified on a per-cluster basis. Also, changing these ratios will not require an MS restart.

> If we make it cluster-based, allocators will have to check this config for each cluster while allocating, which can potentially make allocators expensive.
> The same logic applies to the dashboard calculation as well.
> What granularity and fine-tuning do we require - do you have any use cases?

The intent of having cluster-based overprovisioning ratios is to deploy VMs selectively, depending on the type of application the VM will run. By selectively I mean the admin will want to specify in which clusters to run the VM. This will narrow down the number of clusters we need to check while deploying. I still don't know the exact way in which we should control VM deployment; this definitely needs further discussion and will become clear once we narrow down all the possible use cases.

> 2. What would happen in case of contention?

In case of contention, the hypervisor-specific methods for handling contention will come into effect. This feature assumes that the admin has thought through the possible scenarios and has chosen the overcommit ratios accordingly.

>
> 3. Please remember to take care of the alerts and dashboard-related functionality. Along with this, the list Zone/Pod.../host/pool APIs also use this factor. Please make sure that you take care of that as well.

Thanks for the suggestions.

>
> -Nitin
>
> On 26-Dec-2012, at 11:32 AM, Bharat Kumar wrote:
>
>> Hi all,
>>
>> Presently CloudStack has a provision for CPU overcommit but no provision for RAM overcommit, and there is no way to configure the overcommit ratios on a per-cluster basis.
>>
>> So we propose to add a new feature to allow RAM overcommit and to specify the overcommit ratios (CPU/RAM) on a per-cluster basis.
>>
>> Motivation for the feature:
>> Most operating systems and applications do not use 100% of their allocated resources. This makes it possible to allocate more resources than are actually available. Overcommitting resources allows underutilized VMs to run on fewer hosts, which saves money and power.
>> Currently the CPU overcommit ratio is a global parameter, which means there is no way to fine-tune or have granular control over the overcommit ratios.
>>
>> This feature will enable:
>> 1.) Configuring the overcommit ratios on a per-cluster basis.
>> 2.) The RAM overcommit feature on Xen and KVM. (It already exists for VMware.)
>> 3.) Updating the overcommit ratios of a cluster.
>>
>> Regards,
>> Bharat Kumar.
>
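
To make the capacity math in this thread concrete, here is a minimal sketch assuming a simple "effective capacity = physical capacity * per-cluster ratio" model. It is not CloudStack allocator code; the class and method names are hypothetical and purely illustrative.

// Hypothetical sketch, not actual CloudStack code: shows how a per-cluster
// RAM overcommit ratio would change the capacity an allocator compares against.
public class OvercommitSketch {

    // Effective capacity = physical capacity * per-cluster overcommit ratio.
    static long effectiveCapacity(long physicalBytes, double overcommitRatio) {
        return (long) (physicalBytes * overcommitRatio);
    }

    // A new VM fits if its requested RAM plus what is already allocated
    // stays within the effective (overcommitted) capacity of the host.
    static boolean vmFits(long requestedBytes, long allocatedBytes,
                          long physicalBytes, double ramOvercommitRatio) {
        return allocatedBytes + requestedBytes
                <= effectiveCapacity(physicalBytes, ramOvercommitRatio);
    }

    public static void main(String[] args) {
        long hostRam   = 64L * 1024 * 1024 * 1024; // 64 GiB physical RAM
        long allocated = 60L * 1024 * 1024 * 1024; // RAM already handed out to VMs
        long request   =  8L * 1024 * 1024 * 1024; // new VM asks for 8 GiB

        // With no overcommit (ratio 1.0) the VM is rejected; with a
        // cluster-level ratio of 1.5 the same host can still accept it.
        System.out.println(vmFits(request, allocated, hostRam, 1.0)); // false
        System.out.println(vmFits(request, allocated, hostRam, 1.5)); // true
    }
}

This also shows where Pranav's concern comes in: once the VMs actually touch more memory than the host physically has, the hypervisor must reclaim it (ballooning, page sharing, and ultimately host swapping), which is where the performance cost appears.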