Hello, 

I recently tried to migrate a volume from one RBD storage pool to another and 
noticed a possible issue with the migration logic, which I was hoping to 
discuss with you. 

My setup: ACS 4.11.1.0 
Ceph + RBD for two primary storage pools (an hdd pool and an ssd pool) 
Storage tags are used together with the disk offerings (the rbd tag for 
hdd-backed volumes and the rbd-ssd tag for ssd-backed volumes) 

What I tried to do: move a single volume from the hdd pool over to the ssd 
pool. The migration went well according to the CloudStack job result, and I 
ended up with the volume on the ssd storage pool. 
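For reference, the migration I performed is equivalent to the following
CloudMonkey call (the UUIDs below are placeholders, not my real IDs; I did the
actual migration through the management interface):

```shell
# migrateVolume moves a volume to another primary storage pool.
# Both UUIDs are hypothetical placeholders for this example.
cloudmonkey migrate volume \
    volumeid=<volume-uuid> \
    storageid=<ssd-pool-uuid>
```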

After the migration was done, I had a look at the disk offering of the 
migrated volume, and it was still the hdd disk offering despite the volume now 
being stored on the ssd pool. I tried to change the disk offering to the ssd 
one and got an error saying that the storage tags must be the same. Obviously, 
in my case, the storage tags of the hdd and ssd offerings are different. I 
checked the database and, indeed, it still has the hdd disk offering id. 
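This is roughly the check I ran against the management server database (table
and column names as they appear in my 4.11 schema; the volume name is a
placeholder):

```shell
# In the 'cloud' database, volumes.pool_id points at the ssd pool,
# while volumes.disk_offering_id still references the hdd disk offering.
mysql -u cloud -p cloud -e \
  "SELECT id, name, pool_id, disk_offering_id
   FROM volumes
   WHERE name = '<my-volume-name>';"
```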

I then tried to start the VM and, to my surprise, it started. From my previous 
experience and my understanding of how tags work with storage, the VM should 
not have started: the disk offering tag of the migrated volume points to the 
hdd storage, where the volume no longer exists, so starting the VM should have 
errored out with something like Insufficient resources. 

So, I have a bit of an inconsistency going on with that volume: according to 
the CloudStack GUI, the volume is stored on the ssd pool but has a disk 
offering from the hdd pool, and there is no way to change that from the GUI 
itself. 


My question is: how did the VM start? Did CloudStack ignore the storage tags, 
or is there another reason? 

Thanks 
