This is what we have done as well.

We made our flavors stackable, starting with our average deployed flavor
size and making every larger flavor a multiple of it. I.e., if our average
deployed flavor is 8GB of RAM with 120GB of disk, the next flavor up is
16GB with 240GB of disk, and the one above that is 32GB with 480GB of disk.
From there it's easy to say that a node with 256GB of RAM will average
~30 VMs (256GB / 8GB, less a little host overhead), which means we need
~3.6TB of local storage per node, assuming you don't overallocate disk or
RAM. In practice, though, you can track a running average of the disk
space actually consumed, provision toward that plus a bit of a buffer, and
run with disk oversubscription.
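
To make that concrete, here's the back-of-the-envelope math as a quick
Python sketch (the numbers are the example values above; the overhead
and utilization figures are illustrative assumptions, not our real data):

    # Node sizing from the average deployed flavor.
    avg_ram_gb = 8       # average flavor: 8GB of RAM...
    avg_disk_gb = 120    # ...with 120GB of disk
    node_ram_gb = 256    # RAM per hypervisor

    # ~32 VMs fit by RAM alone; call it ~30 after host overhead.
    vms_per_node = node_ram_gb // avg_ram_gb - 2

    # Local storage needed with no disk or RAM overallocation.
    local_tb = vms_per_node * avg_disk_gb / 1000
    print(f"{vms_per_node} VMs/node -> {local_tb:.1f}TB of local disk")

    # With a measured running average of actual consumption plus a
    # buffer you can oversubscribe instead (assume VMs really use
    # ~60% of their disk and keep a 20% buffer).
    target_tb = vms_per_node * avg_disk_gb * 0.6 * 1.2 / 1000
    print(f"Oversubscribed target: {target_tb:.1f}TB/node")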

We currently have no desire to remove local storage; we want the root
disks to stay on local storage. That said, in the future we will most
likely ship smaller root disks and, if people need more space, ask them
to provision an RBD volume through Cinder.
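
For reference, that user workflow would look something like this (a
sketch using openstacksdk; the cloud, volume, and server names are made
up for illustration):

    import openstack

    # Assumes a clouds.yaml entry named 'mycloud'.
    conn = openstack.connect(cloud='mycloud')

    # Provision an RBD-backed volume through Cinder...
    vol = conn.create_volume(size=100, name='data-vol', wait=True)

    # ...and attach it to the instance with the small local root disk.
    server = conn.get_server('my-instance')
    conn.attach_volume(server, vol, wait=True)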

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Edmund Rhudy (BLOOMBERG/ 120 PARK)" <erh...@bloomberg.net>
Reply-To: Edmund Rhudy <erh...@bloomberg.net>
Date: Thursday, November 10, 2016 at 8:47 AM
To: "war...@wangspeed.com" <war...@wangspeed.com>, "rovanleeu...@ebay.com" 
<rovanleeu...@ebay.com>
Cc: "openstack-operators@lists.openstack.org" 
<openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] Managing quota for Nova local storage?

We didn't come up with one. RAM on our HVs is the limiting factor since we 
don't run with memory overcommit, so the ability of people to run an HV out of 
disk space ended up being moot. ¯\_(ツ)_/¯
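
(For reference, "no memory overcommit" corresponds to pinning Nova's RAM
allocation ratio on the computes; the disk ratio alongside it is where
disk oversubscription would be tuned. The values below are illustrative,
not anyone's actual config:)

    [DEFAULT]
    # No RAM overcommit; >1.0 would oversubscribe memory.
    ram_allocation_ratio = 1.0
    # Likewise for local disk.
    disk_allocation_ratio = 1.0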

Long term we would like to switch to being exclusively RBD-backed and get rid 
of local storage entirely, but that is Distant Future at best.

From: rovanleeu...@ebay.com
Subject: Re: [Openstack-operators] Managing quota for Nova local storage?
Hi,

Found this thread in the archive, so this is a bit of a late reaction.
We are hitting the same issue, so I created a blueprint:
https://blueprints.launchpad.net/nova/+spec/nova-local-storage-quota

If you guys already found a nice solution to this problem I’d like to hear it :)

Robert van Leeuwen
eBay - ECG

From: Warren Wang <war...@wangspeed.com>
Date: Wednesday, February 17, 2016 at 8:00 PM
To: Ned Rhudy <erh...@bloomberg.net>
Cc: "openstack-operators@lists.openstack.org" 
<openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] Managing quota for Nova local storage?

We are in the same boat. We can't get rid of ephemeral because of its
speed and independence. I get it, but it makes managing all these tiny
pools a scheduling and capacity nightmare.
Warren @ Walmart

On Wed, Feb 17, 2016 at 1:50 PM, Ned Rhudy (BLOOMBERG/ 731 LEX) 
<erh...@bloomberg.net> wrote:
The subject says it all - does anyone know of a method by which quota can be 
enforced on storage provisioned via Nova rather than Cinder? Googling around 
appears to indicate that this is not possible out of the box (e.g., 
https://ask.openstack.org/en/question/8518/disk-quota-for-projects/).

The rationale is we offer two types of storage, RBD that goes via Cinder and 
LVM that goes directly via the libvirt driver in Nova. Users know they can 
escape the constraints of their volume quotas by using the LVM-backed 
instances, which were designed to provide a fast-but-unreliable RAID 0-backed 
alternative to slower-but-reliable RBD volumes. Eventually users will hit 
their max quota in some other dimension (CPU or memory), but we'd like to 
be able to set limits directly on how much local storage a tenancy consumes.
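
Absent a native quota, per-tenant usage can at least be measured
out-of-band. A rough sketch (assuming openstacksdk with admin
credentials; the cloud name is made up and the flavor lookup details
vary by Nova microversion):

    import openstack
    from collections import defaultdict

    # Sum root + ephemeral disk across each project's instances.
    conn = openstack.connect(cloud='admin-cloud')

    usage_gb = defaultdict(int)
    for server in conn.list_servers(all_projects=True):
        flavor = conn.get_flavor(server['flavor']['id'])
        usage_gb[server['project_id']] += (
            (flavor['disk'] or 0) + (flavor['ephemeral'] or 0))

    for project, gb in sorted(usage_gb.items()):
        print(f"{project}: {gb}GB of local storage provisioned")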

Does anyone have a solution they've already built to handle this scenario? We 
have a few ideas already for things we could do, but maybe somebody's already 
come up with something. (Social engineering on our user base by occasionally 
destroying a random RAID 0 to remind people of their unsafety, while tempting, 
is probably not a viable candidate solution.)

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

