fully very small change in
drivers to support it. Technically I don't see it as an issue.
However, is it a change we'd be willing to accept? Is there any good
reason not to do this? Are there any less esoteric workflows which
might use this feature?
Matt
--
Matthew Booth
ependent before doing this, at which
point the volume itself should be migratable?
If we can establish that there's an acceptable alternative to calling
volume-update directly for all use-cases we're aware of, I'm going to
propose heading off this class of bug by disabling it for non-cinder
server is rebuilt, and the volume is not
deleted. The user will still lose their data, of course, but that's implied
by the rebuild they explicitly requested. The volume id will remain the
same.
[1] I suspect this would require new functionality in cinder to
re-initialize from image.
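To make the footnote concrete, here is a purely hypothetical sketch of the shape such a cinder call might take. The 'os-reimage' action name, the URL layout, and the helper itself are all illustrative assumptions; no such API existed at the time of writing.

```python
# Hypothetical sketch only: the 'os-reimage' volume action and this
# request shape are illustrative assumptions, not an existing cinder
# API. The idea is that cinder would rewrite the volume's contents
# from a Glance image in place, so the volume id never changes.

def build_reimage_request(volume_id, image_id):
    """Return the (path, body) of a hypothetical volume-action request
    asking cinder to re-initialize an existing volume from an image."""
    path = '/volumes/%s/action' % volume_id
    body = {'os-reimage': {'image_id': image_id}}
    return path, body
```

A rebuild of a volume-backed server could then issue something like this against the existing root volume instead of deleting and recreating it.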
Matt
--
that
nobody will be sad if I remove all traces of it in its current form. If
anybody is using it, or knows of anybody using it, could you let me know?
What workarounds are you using?
Secondly, if it did work is anybody interested in this feature?
Thanks,
Matt
--
Matthew Booth
diffstat:
nova/tests/unit/virt/libvirt/test_imagecache.py | 265 ++--
nova/virt/libvirt/imagecache.py | 211 +--
2 files changed, 23 insertions(+), 453 deletions(-)
Happy Wednesday :)
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
On Tue, May 24, 2016 at 11:06 AM, John Garbutt wrote:
> On 24 May 2016 at 10:16, Matthew Booth wrote:
> > During its periodic task, ImageCacheManager does a checksum of every
> > image in the cache. It verifies this checksum against a previously
> > stored value,
> >
Would anybody be sad if I deleted it?
Matt
[1] Incidentally, there also seems to be a bug in this implementation, in
that it doesn't hold the lock on the image itself at any point during the
hashing process, meaning that it cannot guarantee that the image has
finished downloading yet.
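For illustration, a minimal sketch of the locking pattern the footnote suggests is missing: take the per-image lock before hashing, so a concurrent download cannot hand the checker a half-written file. The in-process lock registry and sha256 choice here are assumptions for the sketch, not nova's actual implementation (which uses file-based locks via lockutils).

```python
import hashlib
import threading

# Stand-in for a per-image lock registry; a threading.Lock is enough
# to show the pattern, though nova itself uses file-based locks.
_image_locks = {}
_registry_lock = threading.Lock()

def _lock_for(path):
    with _registry_lock:
        return _image_locks.setdefault(path, threading.Lock())

def checksum_image(path, chunk_size=64 * 1024):
    """Hash an image while holding its lock, so the hash cannot be
    computed over a partially downloaded file."""
    with _lock_for(path):
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                h.update(chunk)
        return h.hexdigest()
```

Whatever process writes the image would need to hold the same lock while downloading for this to close the race.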
--
Matthew Booth