On 8/13/2018 4:42 PM, melanie witt wrote:
From what I can find in the PTG notes [1] and the spec, it looks like this
didn't go forward for lack of general interest. We have a lot of work to
review every cycle and we generally focus on functionality that impacts
operators the most, and we look for +1s on specs from operators who are
interested in the features. From what I can tell from the
comments/votes, there isn't much/any operator interest in live-resize.
As has been mentioned, whether or not resize down is supported is
hypervisor-specific. For example, in the libvirt driver, resize down of the
ephemeral disk is not allowed at all and resize down of the root disk is
only allowed if the instance is boot-from-volume [2]. The xenapi driver
disallows resize down of the ephemeral disk [3], the vmware driver disallows
resize down of the root disk [4], and the hyperv driver disallows resize
down of the root disk [5].
So, allowing only live-resize up would be a way to behave consistently
across virt drivers.
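To make that concrete, here is a minimal sketch of the kind of guard a
"live-resize up only" policy implies. This is not the actual nova driver
code; old_flavor/new_flavor are just stand-ins for flavor objects with the
usual vcpus/memory_mb/root_gb/ephemeral_gb fields.

  # Hypothetical guard, only to illustrate the "resize up only" rule.
  def check_live_resize_up_only(old_flavor, new_flavor):
      for attr in ('vcpus', 'memory_mb', 'root_gb', 'ephemeral_gb'):
          if new_flavor[attr] < old_flavor[attr]:
              raise ValueError(
                  "live-resize down of %s is not supported" % attr)

  # Resize up passes; any shrink of any resource would raise.
  check_live_resize_up_only(
      {'vcpus': 2, 'memory_mb': 4096, 'root_gb': 20, 'ephemeral_gb': 0},
      {'vcpus': 4, 'memory_mb': 8192, 'root_gb': 40, 'ephemeral_gb': 0})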
Somewhat related to this, some feedback I got from our product teams this
last week was that they'd like to see the duplicate resource allocations
during a (cold) resize to the same host fixed. Since Queens the migration
record holds the old flavor allocations and the instance holds the new
flavor allocations, but the same-host compute node resource provider
still has allocations from both during the resize, which might take it
out of scheduling contention even though we only need to count the max()
of each resource between the old/new flavors. Our public cloud is very keen
on maximizing efficient usage of hosts (packing) for cost reasons
(obviously, and this is common), but this isn't just a public cloud cost
savings thing. It's also an issue for, are you ready for this?
**EDGE!!!** Simply because you could have one or two compute hosts at a
site, you can't afford the duplicate resource allocations for a resize in
that case. Anyway, it's somewhat tangential to the live resize stuff,
but it's an added complication in existing functionality that we should
fix, and Kevin/Yikun/myself (one of us) plan on working on that in Stein.
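For what it's worth, the max() idea looks roughly like this. It's a
hypothetical helper, not actual nova/placement code; the dicts just map
resource classes to amounts.

  # During a same-host resize the compute node provider only needs to
  # reserve the larger of the old and new flavor for each resource class,
  # not the sum of both allocations.
  def same_host_resize_usage(old_alloc, new_alloc):
      classes = set(old_alloc) | set(new_alloc)
      return {rc: max(old_alloc.get(rc, 0), new_alloc.get(rc, 0))
              for rc in classes}

  # e.g. resizing 4 VCPU / 8G -> 8 VCPU / 8G on the same host should count
  # as 8 VCPU / 8G against the provider, not 12 VCPU / 16G:
  print(same_host_resize_usage({'VCPU': 4, 'MEMORY_MB': 8192},
                               {'VCPU': 8, 'MEMORY_MB': 8192}))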
--
Thanks,
Matt