Are you aware of the online storage migration in 4.11.x? :) Imagine migrating a few hundred volumes to another storage, like we did :)

It is yet to be improved to support ceph/nfs to nfs/ceph - for now, afaik, it only supports migration from non-managed to managed storage and vice versa...

Cheers
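For anyone who wants to script such a bulk migration: the operation boils down to the migrateVolume API call (storageid for the target pool, livemigrate=true for a volume attached to a running VM). Below is a minimal, illustrative Python sketch - the endpoint, keys and UUIDs are placeholders, and the signing helper is a simplified version of the usual CloudStack request-signing scheme, so treat it as a starting point rather than production code.

```python
import base64
import hashlib
import hmac
import urllib.parse

import requests

API_URL = "https://cloud.example.com/client/api"  # placeholder management-server endpoint
API_KEY = "your-api-key"                          # placeholder credentials
SECRET_KEY = "your-secret-key"


def signed_request(command, **params):
    """Send a signed CloudStack API GET request (HMAC-SHA1 request signing)."""
    params.update({"command": command, "apikey": API_KEY, "response": "json"})
    # Signature base: key=value pairs, URL-encoded, lower-cased, sorted, '&'-joined.
    base = "&".join(
        "{}={}".format(key.lower(), urllib.parse.quote(str(params[key]), safe="").lower())
        for key in sorted(params)
    )
    digest = hmac.new(SECRET_KEY.encode(), base.encode(), hashlib.sha1).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return requests.get(API_URL, params=params).json()


# Live-migrate one volume to another primary storage pool (UUIDs are placeholders).
job = signed_request(
    "migrateVolume",
    volumeid="volume-uuid-here",
    storageid="target-pool-uuid-here",
    livemigrate="true",
)
print(job)
```

migrateVolume is asynchronous, so the response carries a job id that you would then poll with queryAsyncJobResult; a bulk migration is just a loop over the volume list with such calls.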
On Thu, Oct 11, 2018, 16:08 Andrei Mikhailovsky <and...@arhont.com.invalid> wrote:

> Thanks for your input and the explanations, gents.
>
> This is not really a big issue for me as we have a small-scale environment that doesn't require volume disk migration. And frankly speaking, the disk migration using the manual method works far quicker than the GUI way, where the disk is probably first exported to the NFS secondary storage and reimported back.
>
> But it is nice to see the work being done to improve the migration logic in the upcoming releases.
>
> Cheers
>
> ----- Original Message -----
> > From: "Andrija Panic" <andrija.pa...@gmail.com>
> > To: "dev" <dev@cloudstack.apache.org>
> > Sent: Thursday, 11 October, 2018 13:54:53
> > Subject: Re: Broken volume migration logic?
>
> > Hi Rafael, Andrei,
> >
> > that sounds wonderful!
> >
> > @Andrei, we had exactly the same situation, but we made internal code changes in ACS 4.5/4.8 (never committed back to the community, unfortunately...), so after the migration is done and we want to change the offering, the list of offerings is no longer matched against the volume's TAG only (so no error like the one you still get) - the list of offerings shown depends on the CURRENT POOL of the volume - we match the tags of the existing offerings against the tags of the CURRENT POOL where the volume lives - so only matching offerings (targeting the new pool...) are shown.
> >
> > (We had CEPH/NFS as the source with a "deprecated" tag and all ceph/nfs offerings deleted/inactive, and the destination pool was SolidFire with a new storage tag and a set of compute/disk offerings with the tag "solidfire".)
> >
> > In our case this means: the volume was on CEPH and had a CEPH offering - after we migrated it to SolidFire, only the offerings whose tag matches the tags of the current pool (SolidFire) are shown... hope I was clear with this long explanation :)
> >
> > For volumes specifically, storage tags are (to my knowledge) only evaluated when you deploy a VM (root volume) or create a data volume - you can see this in the logs when ACS searches for a pool having this or that tag...
> >
> > Once a resource (volume) is DEPLOYED (exists), it works as it is (as Rafael explained), and offerings are ignored for that matter - BUT interestingly enough, some properties (i.e. min/max IOPS a.k.a. storage QoS, or KVM I/O throttling a.k.a. hypervisor QoS) are inherited and copied over from the offering to the actual volume's row in the DB (for that specific volume...) when the volume is created, while some properties like "cache_mode" (write-back or not) are still read/applied on the fly from the actual offering... so it's mix and match :)
> >
> > I might be able to provide the code that did this new way of matching tags, in case it would be interesting (but no human power to commit anything/PR, I can just share it with Rafael or someone who is willing to push it upstream). Rafael?
> >
> > Cheers
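As a rough illustration of the matching rule Andrija describes above - offerings filtered by the tags of the volume's current pool rather than by the old offering's tag - here is a hypothetical Python sketch. It is not the actual ACS code (which is Java), and the names and data shapes are made up.

```python
def offerings_for_current_pool(offerings, current_pool_tags):
    """Filter disk offerings by the tags of the pool the volume currently
    lives on, instead of comparing against the volume's old offering tag."""
    pool_tags = set(current_pool_tags)
    matching = []
    for offering in offerings:
        required = {t.strip() for t in (offering.get("tags") or "").split(",") if t.strip()}
        # An untagged offering fits any pool; otherwise every tag the offering
        # requires must be present on the current pool.
        if not required or required <= pool_tags:
            matching.append(offering)
    return matching


# Example: a volume migrated from a Ceph pool (tag "rbd") to SolidFire (tag "solidfire").
offerings = [
    {"name": "ceph-hdd", "tags": "rbd"},
    {"name": "ceph-ssd", "tags": "rbd-ssd"},
    {"name": "solidfire-gold", "tags": "solidfire"},
]
print(offerings_for_current_pool(offerings, {"solidfire"}))
# -> [{'name': 'solidfire-gold', 'tags': 'solidfire'}]
```

An offering with no tags stays selectable for any pool, which mirrors how untagged offerings behave at deployment time.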
> > On Thu, 11 Oct 2018 at 14:16, Rafael Weingärtner <rafaelweingart...@gmail.com> wrote:
> >
> >> What you described seems to be the new feature introduced with https://issues.apache.org/jira/browse/CLOUDSTACK-10323 and https://issues.apache.org/jira/browse/CLOUDSTACK-10240. However, this feature should only have been introduced in master (4.12). I was not able to find those commits in 4.11.1.0, though. Maybe ACS was already allowing movement between shared storages with different tags? Anyway, the block of code used to do this process has been totally re-written (now everything is unit-tested). It is only in 4.12 though… It will also allow the placement to be overridden (ignoring storage tags and storage types), and it will also allow replacing the disk offering while migrating the disk to a new/different storage system.
> >>
> >> To answer your questions.
> >>
> >> > My question is how did the vm start? Did cloudstack ignore the storage tags or is there another reason?
> >>
> >> Once the volume is already placed somewhere, CloudStack does not do any extra checking (of whether it can use the volume as is). Therefore, it simply moves on with the normal VM start.
> >>
> >> On Thu, Oct 11, 2018 at 8:46 AM Andrei Mikhailovsky <and...@arhont.com.invalid> wrote:
> >>
> >> > Hello,
> >> >
> >> > I have recently tried to migrate a volume from one rbd storage pool to another. I have noticed a possible issue with the migration logic, which I was hoping to discuss with you.
> >> >
> >> > My setup: ACS 4.11.1.0
> >> > Ceph + rbd for two primary storage pools (hdd and ssd pools)
> >> > Storage tags are used together with the Disk Offerings (the rbd tag is used for hdd-backed volumes and the rbd-ssd tag is used for ssd-backed volumes)
> >> >
> >> > What I tried to do: move a single volume from the hdd pool over to the ssd pool. The migration went well according to the cloudstack job result. I ended up with a volume on the ssd storage pool.
> >> >
> >> > After the migration was done, I had a look at the disk service offering of the migrated volume, and the service offering was still the hdd service offering despite the volume now being stored on the ssd pool. I tried to change the disk offering to the ssd one and got an error saying that the storage tags must be the same. Obviously, in my case, the storage tags of the hdd and ssd pool offerings are different. I have checked the database and indeed, the db still has the hdd disk offering id.
> >> >
> >> > I have tried to start the vm and, to my surprise, the vm started. From my previous experience and my understanding of how tags work with storage, the vm should not have started. The disk offering tag of the migrated volume points to the hdd storage where this volume doesn't exist. So, starting the vm should have errored out with something like Insufficient resources.
> >> >
> >> > So, I have a bit of an inconsistency going on with that volume. According to the cloudstack gui, the volume is stored on the ssd pool but has a disk offering from the hdd pool, and there is no way to change that from the gui itself.
> >> >
> >> > My question is how did the vm start? Did cloudstack ignore the storage tags or is there another reason?
> >> >
> >> > Thanks
> >>
> >> --
> >> Rafael Weingärtner
> >
> > --
> > Andrija Panić
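For completeness: the 4.12 change Rafael refers to should let the offering be swapped as part of the migration itself, which is exactly the inconsistency Andrei ran into. A hedged sketch, reusing the signed_request helper from the earlier example; the newdiskofferingid parameter name is taken from the linked tickets and should be verified against the API docs of the version you actually run.

```python
# Hypothetical 4.12 usage per CLOUDSTACK-10323/10240: migrate the volume to a
# differently-tagged pool and swap its disk offering in the same call. The
# newdiskofferingid parameter name comes from the linked tickets - verify it
# against your version's migrateVolume API reference before relying on it.
job = signed_request(            # signed_request() as sketched earlier in the thread
    "migrateVolume",
    volumeid="volume-uuid-here",
    storageid="ssd-pool-uuid-here",
    livemigrate="true",
    newdiskofferingid="rbd-ssd-offering-uuid-here",
)
print(job)  # async job id; poll with queryAsyncJobResult
```

On 4.11.1.0, where that parameter is not available, the workaround remains fixing the volume's disk_offering_id by hand (as Andrei did via the database) or waiting for the re-written migration code in 4.12.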