On Wed, Aug 19, 2015 at 7:48 PM, <taylor.ber...@solnet.co.nz> wrote:
> Hi everyone,
>
> Apologies for the duplicate send; it looks like my mail client doesn't
> create very clean HTML messages. Here is the message in plain text. I'll
> make sure to send to the list in plain text from now on.
>
> In my current pre-production deployment we were looking for a method to
> live-extend volumes attached to an instance, which was one of the
> requirements for the deployment. I've worked with libvirt hypervisors
> before, so it didn't take long to find a workable solution. However, I'm
> not sure how transferable this will be across deployment models. Our
> deployment uses libvirt for nova and ceph for backend storage, which
> means libvirt connects to the volumes over rbd.
>
> Currently the method I use is:
>
> - Force cinder to run an extend operation.
> - Tell libvirt that the attached disk has been extended.
>
> It would be worth discussing whether this can be ported upstream so that
> the API handles the legwork, rather than this current manual method.
>
> Detailed instructions.
> You will need: the volume-id of the volume you want to resize, plus the
> hypervisor_hostname and instance_name of the instance the volume is
> attached to.
>
> Example: extending volume f9fa66ab-b29a-40f6-b4f4-e9c64a155738 attached
> to instance-00000012 on node-6 to 100GB:
>
> $ cinder reset-state --state available f9fa66ab-b29a-40f6-b4f4-e9c64a155738
> $ cinder extend f9fa66ab-b29a-40f6-b4f4-e9c64a155738 100
> $ cinder reset-state --state in-use f9fa66ab-b29a-40f6-b4f4-e9c64a155738
>
> $ ssh node-6
> node-6$ virsh qemu-monitor-command instance-00000012 --hmp "info block" | grep f9fa66ab-b29a-40f6-b4f4-e9c64a155738
> drive-virtio-disk1: removable=0 io-status=ok file=rbd:volumes-slow/volume-f9fa66ab-b29a-40f6-b4f4-e9c64a155738:id=cinder:key=<keyhere>==:auth_supported=cephx\\;none:mon_host=10.1.226.64\\:6789\\;10.1.226.65\\:6789\\;10.1.226.66\\:6789 ro=0 drv=raw encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
>
> This will get you the disk id, which in this case is drive-virtio-disk1.
>
> node-6$ virsh qemu-monitor-command instance-00000012 --hmp "block_resize drive-virtio-disk1 100G"
>
> Finally, you need to rescan the disk inside the instance itself and grow
> the filesystem. This step is OS-specific.
>
> I've tested this a few times and it seems very reliable.
>
> Taylor Bertie
> Enterprise Support Infrastructure Engineer
>
> Mobile +64 27 952 3949
> Phone +64 4 462 5030
> Email taylor.ber...@solnet.co.nz
>
> Solnet Solutions Limited
> Level 12, Solnet House
> 70 The Terrace, Wellington 6011
> PO Box 397, Wellington 6140
>
> www.solnet.co.nz
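For the OS-specific final step in the quoted walkthrough, a minimal sketch for a Linux guest, assuming the volume shows up as /dev/vdb with an ext4 filesystem directly on the device (the device name and filesystem type are assumptions, not from the original post):

instance$ lsblk /dev/vdb            # the new 100G size should now be visible
instance$ sudo resize2fs /dev/vdb   # grow ext4 online to fill the device

If the filesystem lives on a partition instead, grow the partition first (e.g. growpart /dev/vdb 1 from cloud-utils) before running resize2fs; for XFS, run xfs_growfs against the mount point instead. With virtio-blk the guest usually sees the new capacity as soon as block_resize completes, so no explicit rescan is needed; virtio-scsi disks may need a SCSI rescan first.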
Hey Taylor,

This is something that has come up a number of times, but I personally
didn't have a good solution for it on the iSCSI side. I'm not sure your
method would work with iSCSI-attached devices, because typically you need
to detach and reattach for size changes to take effect; in other words,
I'm uncertain whether libvirt would be able to see the changes. That said,
I also didn't know about this option in libvirt, so it may work out.

I'll let the Nova folks reply regarding interest from their side in the M
release, but I know from the Cinder side and the Trove side this would in
fact be a desirable feature.

Appreciate the detailed write-up, and again, WELCOME!!

Thanks,
John
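For reference, the usual way to make an iSCSI initiator notice a size change without detaching is to force a rescan on the compute node; a rough sketch assuming open-iscsi and that the volume mapped to /dev/sdb (both assumptions, not from this thread):

node-6$ sudo iscsiadm -m session -R                      # rescan all active iSCSI sessions
node-6$ echo 1 | sudo tee /sys/block/sdb/device/rescan   # re-read capacity of one SCSI device

Whether qemu then picks up the new size without the same virsh block_resize call Taylor describes is the open question; most likely that call is still needed.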