On 20/05/14 12:17, Jay Pipes wrote:
Hi Zane, sorry for the delayed response. Comments inline.
On 05/06/2014 09:09 PM, Zane Bitter wrote:
On 05/05/14 13:40, Solly Ross wrote:
One thing that I was discussing with @jaypipes and @dansmith over
on IRC was the possibility of breaking flavors down into separate
components -- i.e. have a disk flavor, a CPU flavor, and a RAM flavor.
This way, you still get the control of the size of your building blocks
(e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid
exponential flavor explosion by separating out the axes.
Dimitry and I have discussed this on IRC already (no-one changed their
mind about anything as a result), but I just wanted to note here that I
think even this idea is crazy.
VMs are not allocated out of a vast global pool of resources. They're
allocated on actual machines that have physical hardware costing real
money in fixed ratios.
Here's a (very contrived) example. Say your standard compute node can
support 16 VCPUs and 64GB of RAM. You can sell a bunch of flavours:
maybe 1 VCPU + 4GB, 2 VCPU + 8GB, 4 VCPU + 16GB... &c. But if (as an
extreme example) you sell a server with 1 VCPU and 64GB of RAM you have
a big problem: 15 VCPUs that nobody has paid for and you can't sell.
(Disks add a new dimension of wrongness to the problem.)
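To make the arithmetic of that contrived example concrete, here is a minimal sketch (the node sizes and 1 VCPU : 4 GB ratio are taken from the example above; the function name is just illustrative):

```python
# Stranded-capacity arithmetic for the contrived example above:
# a compute node with 16 VCPUs and 64 GB of RAM, sold in a fixed
# 1 VCPU : 4 GB ratio.
NODE_VCPUS = 16
NODE_RAM_GB = 64

def stranded(vcpus_sold, ram_gb_sold):
    """Resources left over once one dimension of the node is exhausted."""
    return (NODE_VCPUS - vcpus_sold, NODE_RAM_GB - ram_gb_sold)

# Balanced flavours pack cleanly: four 4-VCPU/16GB instances use everything.
print(stranded(4 * 4, 4 * 16))   # (0, 0)

# A single 1-VCPU/64GB instance exhausts the RAM but strands 15 VCPUs
# that can no longer be sold to anyone.
print(stranded(1, 64))           # (15, 0)
```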
You are assuming a public cloud provider use case above. As much as I
tend to focus on the utility cloud model, where the incentives are
around maximizing the usage of physical hardware by packing in as many
paying tenants into a fixed resource, this is only one domain for
OpenStack.
I was assuming the use case advanced in this thread, which sounded like
a semi-public cloud model.
However, I'm actually trying to argue from a higher level of abstraction
here. In any situation where there are limited resources, optimal
allocation of those resources will occur when the incentives of the
suppliers and consumers of said resources are aligned, independently of
whose definition of "optimal" you use. This applies equally to public
clouds, private clouds, lemonade stands, and the proverbial two guys
stranded on a desert island. In other words, it's an immutable property
of economies, not anything specific to one use case.
There are, for good or bad, IT shops and telcos that frankly are willing
to dump money into an inordinate amount of hardware -- and see that
hardware be inefficiently used -- in order to appease the demands of
their application customer tenants. The impulse of onboarding teams for
these private cloud systems is to "just say yes", with utter disregard for
the overall cost efficiency of the proposed customer use cases.
Fine, but what I'm saying is that you can just give the customer _more_
than they really wanted (i.e. round up to the nearest flavour). You can
charge them the same if you want - you can even decouple pricing from
the flavour altogether if you want. But what you can't do is assume
that, just because you gave the customer exactly what they needed and
not one kilobyte more, you still get to use/sell the excess capacity you
didn't allocate to them. Because you may not.
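"Round up to the nearest flavour" can be sketched in a few lines: given a fixed menu of flavours (the sizes below are hypothetical, chosen to match the node's 1:4 ratio), pick the smallest one that covers the request instead of carving out an exact-fit allocation:

```python
# Hypothetical flavour menu: (vcpus, ram_gb), all in the node's fixed
# 1 VCPU : 4 GB ratio, smallest first.
FLAVOURS = [
    (1, 4), (2, 8), (4, 16), (8, 32), (16, 64),
]

def round_up(vcpus, ram_gb):
    """Return the smallest flavour that satisfies both dimensions."""
    for f_vcpus, f_ram in FLAVOURS:
        if f_vcpus >= vcpus and f_ram >= ram_gb:
            return (f_vcpus, f_ram)
    raise ValueError("request exceeds the largest flavour")

# A request for 1 VCPU + 64 GB gets the 16-VCPU flavour: the customer
# receives more than they asked for, but the operator's capacity ratio
# (and hence the opportunity cost) is preserved.
print(round_up(1, 64))  # (16, 64)
```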
If there was a simple switching mechanism that allowed a deployer to
turn on or off this ability to allow tenants to construct specialized
instance type configurations, then who really loses here? Public or
utility cloud providers would simply leave the switch to its default of
"off" and folks who wanted to provide this functionality to their users
could provide it. Of course, there are clear caveats around lack of
portability to other clouds -- but let's face it, cross-cloud
portability has other challenges beyond this particular point ;)
The insight of flavours, which is fundamental to the whole concept of
IaaS, is that users must pay the *opportunity cost* of their resource
usage. If you allow users to opt, at their own convenience, to pay only
the actual cost of the resources they use regardless of the opportunity
cost to you, then your incentives are no longer aligned with your
customers.
Again, the above assumes a utility cloud model. Sadly, that isn't the
only cloud model.
The only assumption is that resources are not (effectively) unlimited.
You'll initially be very popular with the kind of customers
who are taking advantage of you, but you'll have to hike prices across
the board to make up the cost leading to a sort of dead-sea effect. A
Gresham's Law of the cloud, if you will, where bad customers drive out
good customers.
Simply put, a cloud allowing users to define their own flavours *loses*
to one with predefined flavours 10 times out of 10.
In the above example, you just tell the customer: bad luck, you want
64GB of RAM, you buy 16 VCPUs whether you want them or not. It can't
actually hurt to get _more_ than you wanted, even though you'd rather
not pay for it (provided, of course, that everyone else *is* paying for
it, and cross-subsidising you... which they won't).
Now, it's not the OpenStack project's job to prevent operators from
going bankrupt. But I think at the point where we are adding significant
complexity to the project just to enable people to confirm the
effectiveness of a very obviously infallible strategy for losing large
amounts of money, it's time to draw a line.
Actually, we're not proposing something more complex, IMO.
What I've been discussing on IRC and other places is getting rid of the
concept of flavours entirely except for in user interfaces, as an easy
way of templatizing the creation of instances. Once an instance is
launched, I've proposed that we don't store the instance_type_id with
the instance any more. Right now, we store the memory, CPU, and root
disk amounts in the instances table, so besides the instance_type
extra_specs information, there is currently no need to keep the concept
of an instance_type around after the instance launch sequence has been
initiated. The instance_type is decomposed into its resource units and
those resource units are used for scheduling decisions, not the flavour
itself. In this way, an instance_type is nothing more than a UI template
to make instance creation a bit easier.
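The decomposition described above can be sketched as follows; note the field and function names are illustrative, not Nova's actual schema:

```python
# Sketch: the flavour is only a UI-level template. At launch it is
# flattened into the quantitative resource amounts that the scheduler
# and the instances table actually use, after which the flavour id
# itself need not be stored with the instance.
FLAVOUR_TEMPLATES = {
    "m1.small": {"vcpus": 1, "memory_mb": 2048, "root_gb": 20},
    "m1.large": {"vcpus": 4, "memory_mb": 8192, "root_gb": 80},
}

def resources_for_launch(flavour_name):
    """Decompose a flavour template into plain resource units."""
    return dict(FLAVOUR_TEMPLATES[flavour_name])

instance = resources_for_launch("m1.large")
print(instance)  # {'vcpus': 4, 'memory_mb': 8192, 'root_gb': 80}
```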
I'm not familiar with the Nova code at all, so if you think that this is
actually going to be _easier_ to maintain (and the cost of implementing
it is justified on that basis alone), then you should absolutely go
ahead. I agree, as you said above, that no-one loses in that case
(unless they choose to).
(FWIW the idea of imposing constraints on which combinations are allowed
together did sound pretty complicated to me, compared to the alternative
of the operator just listing the combinations they want to allow as
flavors.)
The problem to date is that the introduction of the extra_specs stuff
was done at the flavour level. Improperly, IMO. The reason is because
the extra_specs don't represent quantitative resources, like the other
attributes of a flavour do, but qualitative resource descriptions
(i.e. it's a "NUMA system", not "it has 4 CPUs"). This mismatch between
quantitative and qualitative is at the root of the problems we are
currently seeing with a number of proposed blueprints, from the
extensible resource tracker, to the problems inherent to the quota
management system, and all of the PCI-related blueprints.
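The quantitative/qualitative split can be illustrated with a small sketch (field names are made up for illustration): quantitative resources are consumable amounts the scheduler subtracts from a host's capacity, while qualitative descriptions are boolean constraints a host either satisfies or does not:

```python
# A host has consumable capacity (quantitative) and a set of traits
# (qualitative, extra_specs-style descriptions like "NUMA system").
host = {
    "capacity": {"vcpus": 16, "memory_mb": 65536},
    "traits": {"numa", "ssd"},
}

def host_fits(request_amounts, required_traits):
    """Quantities must fit within capacity; traits must all be present."""
    fits = all(host["capacity"][r] >= n for r, n in request_amounts.items())
    return fits and required_traits <= host["traits"]

print(host_fits({"vcpus": 4, "memory_mb": 8192}, {"numa"}))  # True
print(host_fits({"vcpus": 4, "memory_mb": 8192}, {"gpu"}))   # False
```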
Yeah, that sounds like it could use cleaning up.
By the way, the whole theory behind this idea seems to be that this:
nova server-create --cpu-flavor=4 --ram-flavor=16G --disk-flavor=200G
minimises the cognitive load on the user, whereas this:
nova server-create --flavor=4-16G-200G
will cause the user's brain to explode from its combinatorial
complexity. I find this theory absurd.
That isn't the theory at all; in fact, nobody has been talking about
cognitive dissonance at the UI level.
I think two threads of discussion from conversations in different IRC
channels got merged onto the list here - the one I joined from did
actually begin with people saying that it would be too confusing to have
a long list of pre-generated flavors, and ended with the same people
saying that separating the components out solved the problem ;)
It sounds like you're saying that you were already considering this for
unrelated reasons, and if that's the case then you should feel free to
ignore me. I'm not suggesting that I see a reason to *not* do it, only
that I don't see a reason *to* do it (and not for want of looking at the
proffered use cases).
cheers,
Zane.
Best,
-jay
In other words, if you really want to lose some money, it's perfectly
feasible with the existing flavour implementation. The operator is only
ever 3 for-loops away from setting up every combination of flavours
possible from combining the CPU, RAM and disk options, and can even
apply whatever constraints they desire.
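Those 3 for-loops might look something like this sketch (option lists and the RAM-per-VCPU constraint are invented for illustration; a real script would shell out to the nova CLI or use novaclient rather than printing):

```python
# Pre-generate every CPU/RAM/disk combination as an ordinary flavour,
# applying whatever constraints the operator desires.
cpu_options = [1, 2, 4, 8, 16]
ram_gb_options = [2, 4, 8, 16, 32, 64]
disk_gb_options = [20, 100, 200]

flavours = []
for cpus in cpu_options:
    for ram in ram_gb_options:
        for disk in disk_gb_options:
            if ram > 4 * cpus:   # example constraint: cap RAM per VCPU
                continue
            name = "%d-%dG-%dG" % (cpus, ram, disk)
            flavours.append(name)
            # In a real deployment this line would be a CLI/API call.
            print("nova flavor-create %s auto %d %d %d"
                  % (name, ram * 1024, disk, cpus))

print(len(flavours))
```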
All that said, Heat will expose any API that Nova implements. Choose
wisely.
cheers,
Zane.
_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev