I was able to manually fix this in the db:

update volumes set state = 'Ready'
where uuid = '8987c39d-c182-4549-8e30-f06c9e9bdbba';
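In case it's useful to anyone hitting the same thing: before forcing the state, it's worth checking which rows are actually stuck. A sketch against the `cloud` database (table and column names as in the update above; I'd take a db backup first):

```sql
-- Sketch: list volume rows stuck mid-migration before forcing any state.
-- Same volumes table as the fix above; back up the database first.
select id, uuid, name, state
from volumes
where removed is null
  and state in ('Migrating', 'Creating');
```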

After this, the VM booted again. I noticed a similar post that mentioned 
increasing job.cancel.threshold.minutes beyond the default 60 minutes; 
hitting that threshold is how I encountered this issue in the first place. 
Raising it should let me complete the volume migration from NFS -> Ceph.
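For reference, the setting can be raised under Global Settings in the UI, or directly in the db. A sketch (the value 480 is just an example; a management server restart may be needed for it to take effect):

```sql
-- Sketch: raise the async-job cancel threshold (default 60 minutes) so a
-- long NFS -> Ceph copy isn't killed mid-migration. A management server
-- restart may be needed for the new value to take effect.
update configuration
set value = '480'
where name = 'job.cancel.threshold.minutes';
```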

Thanks
-jeremy

> On Sunday, Dec 19, 2021 at 10:53 PM, Jeremy Hansen <[email protected]> wrote:
> Since the ceph image was stuck in “Creating” state, I just removed the 
> volume. Immediately after removing the volume, I noticed “Migrating” pop up 
> in the volumes menu for NFS:
>
> http://www.skidrowstudios.com/ss.png
>
> Any clue how I can put this back together?
>
> Thanks
> -jeremy
>
> > On Sunday, Dec 19, 2021 at 5:07 AM, Jeremy Hansen <[email protected]> wrote:
> > I was attempting to migrate a root filesystem from NFS to Ceph. During the 
> > process, Cloudstack came back and told me the process took too long and it 
> > was canceling the job. This left the filesystem in limbo: the NFS copy is 
> > gone and the Ceph image is stuck in “Creating”.
> >
> > I was able to export the image from Ceph using "rbd export 
> > --pool=cloudstack 31c8d8d5-9dde-4512-ab1e-dcce8dbaf6f3 rootfs.img".
> >
> > I’m able to mount the /boot filesystem on this image using the proper 
> > offsets, which suggests the image is probably healthy. But how do I get it 
> > back into Cloudstack, and how do I tell the VM to use this new image for 
> > its root filesystem? The image has an LVM partition and needs to boot in 
> > conjunction with the additional storage I provisioned for this instance, 
> > which makes up the LVM volume being used.
> >
> > I have the image; I just need to get it back into Cloudstack and point the 
> > instance config at it. Or, maybe even simpler, how do I re-establish the 
> > relationship with the image that now exists in Ceph but has no mapping 
> > within Cloudstack, since the job bailed in the middle?
> >
> > This is Cloudstack 4.16.0.0 and Ceph Pacific 16.2.4.
> >
> > Thanks
> > -jeremy
> >
>
