I found a way to make this work. For whatever reason, destroying a VM - although it seems to mainly remove it from vCenter's inventory - leaves some info around that makes vCenter think the datastore is still in use.
Instead of destroying the VM, what I do for managed storage (when we are expunging the VM) is unregister it from vCenter. Once it is unregistered, I can remove the iSCSI connections from the ESX hosts and the datastore goes away.

Destroying a VM is supposed to be similar: it removes the VM from vCenter's inventory and deletes all of the VM's files that are in the datastore in question. However, there must be a VMware bug where destroying the VM still leaves some trace in vCenter that tricks it into thinking the datastore is still in use by this VM.

In the case of managed storage, when you expunge the VM, I plan to delete the corresponding SAN volume via a storage plug-in, so it doesn't matter whether the VM left any files on it.

On Mon, Mar 31, 2014 at 8:34 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:

> Interesting...when I try to unmount the datastore prior to removing the
> iSCSI connections, I get a similar error to when I try to delete the
> datastore outright prior to removing the iSCSI connections: it says the
> datastore is still in use.
>
> When I look at the contents of the datastore, there doesn't appear to be
> anything on it.
>
> There is a clue in the fact that this is only a problem if a VM was
> running off of this datastore. If the datastore was only used to house a
> VMDK file for a data disk, there is no problem removing it by getting rid
> of the iSCSI connections to the SAN volume from the hosts.
>
> On Mon, Mar 31, 2014 at 4:44 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
>
>> Thanks, Kelven...I had seen a similar article about Storage IO
>> Control...perhaps the article you pointed to has more info.
>>
>> On Mon, Mar 31, 2014 at 4:38 PM, Kelven Yang <kelven.y...@citrix.com> wrote:
>>
>>> Would this KB article be helpful? Particularly, it seems that Storage IO
>>> Control needs to be disabled before detaching the datastore.
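[Editor's note: the unregister-then-clean-up expunge sequence described at the top of this message can be sketched roughly as below. Every name here (`unregister_vm`, `remove_iscsi_target`, `delete_volume`, `FakeEndpoint`) is a hypothetical placeholder for illustration only, not an actual CloudStack or vSphere SDK call.]

```python
class FakeEndpoint:
    """Stand-in that just records calls; real code would talk to vCenter,
    the ESX(i) hosts, and the SAN. Illustrative only."""
    def __init__(self, name=""):
        self.name = name
        self.calls = []
    def __getattr__(self, method):
        return lambda *args: self.calls.append((method, args))

def expunge_managed_vm(vcenter, hosts, san, vm_name, iqn):
    """Expunge flow: unregister (do NOT destroy), drop iSCSI links, delete volume."""
    steps = []
    # 1. Unregister the VM instead of destroying it, so vCenter does not
    #    keep the stale "datastore still in use" state described above.
    vcenter.unregister_vm(vm_name)
    steps.append("unregister")
    # 2. Remove the iSCSI connection from every ESX(i) host in the cluster;
    #    once the last host drops the target, the datastore goes away.
    for host in hosts:
        host.remove_iscsi_target(iqn)
        steps.append("disconnect:" + host.name)
    # 3. Delete the backing SAN volume via the storage plug-in; any files
    #    the VM left on the datastore vanish along with the volume.
    san.delete_volume(iqn)
    steps.append("delete_volume")
    return steps
```

The point of the ordering is that the SAN volume is deleted last, which is why leftover VM files on the datastore are harmless in the managed-storage case.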
>>>
>>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004605
>>>
>>> Kelven
>>>
>>> On 3/31/14, 3:14 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:
>>>
>>> > Interesting...I can look into that. Do you know offhand if we already
>>> > have such a call to perform an unmount?
>>> >
>>> > Thanks, Kelven!
>>> >
>>> > On Mon, Mar 31, 2014 at 3:28 PM, Kelven Yang <kelven.y...@citrix.com> wrote:
>>> >
>>> >> On 3/31/14, 1:54 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:
>>> >>
>>> >> > Hi Kelven,
>>> >> >
>>> >> > Thanks for the info!
>>> >> >
>>> >> > I have another question that perhaps you can answer.
>>> >> >
>>> >> > In my situation, with managed storage, I need to create and delete
>>> >> > datastores dynamically. The idea is to have, in some cases, a single VM
>>> >> > (and all of its corresponding files) or a single VMDK data disk file
>>> >> > per datastore so we can guarantee IOPS to the VM or data disk.
>>> >> >
>>> >> > Each datastore is based on an iSCSI target that has guaranteed IOPS.
>>> >> >
>>> >> > For data disks, this process has worked perfectly (first implemented
>>> >> > in 4.2). When I need the datastore, I create an iSCSI target on my SAN,
>>> >> > then establish a connection to it from each host in the VMware cluster,
>>> >> > then create a datastore on the target.
>>> >> >
>>> >> > When I no longer need the data disk, I remove the iSCSI targets from
>>> >> > the hosts and the datastore goes away.
>>> >> >
>>> >> > This same process works pretty well for root disks (and the other
>>> >> > files of a VM) except for when I want to delete the VM and get rid of
>>> >> > its datastore.
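[Editor's note: the per-volume datastore lifecycle described in the quoted message above could be modeled as in the sketch below. All helper names (`create_volume`, `add_iscsi_target`, `create_vmfs_datastore`, `unmount_datastore`, `Recorder`) are illustrative assumptions, not real SolidFire, CloudStack, or vSphere SDK APIs; the teardown unmounts from every host first, in line with the advice given further down in the thread.]

```python
class Recorder:
    """Records calls and returns a canned value; stands in for the SAN and
    the ESX(i) hosts purely for illustration."""
    def __init__(self, name="", ret=None):
        self.name = name
        self.calls = []
        self.ret = ret
    def __getattr__(self, method):
        def call(*args):
            self.calls.append((method, args))
            return self.ret
        return call

def create_datastore(san, hosts, vol_name, min_iops, max_iops):
    # Carve out an iSCSI target with guaranteed IOPS on the SAN.
    iqn = san.create_volume(vol_name, min_iops, max_iops)
    # Establish a connection to the target from each host in the cluster.
    for host in hosts:
        host.add_iscsi_target(iqn)
    # One host formats the volume as VMFS; the others then see the datastore.
    hosts[0].create_vmfs_datastore(iqn, vol_name)
    return iqn

def delete_datastore(san, hosts, iqn, ds_name):
    # Unmount the datastore from *every* host first; otherwise vCenter can
    # keep reporting the datastore as still in use.
    for host in hosts:
        host.unmount_datastore(ds_name)
    # Dropping the iSCSI connections makes the datastore disappear.
    for host in hosts:
        host.remove_iscsi_target(iqn)
    # Finally, the storage plug-in deletes the backing SAN volume.
    san.delete_volume(iqn)
```

The teardown is deliberately the mirror image of the creation path: unmount, disconnect, then delete the volume.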
>>> >> > In this case, I follow the same process of removing the iSCSI
>>> >> > connections from each host in the cluster, but the datastore still
>>> >> > shows up in vCenter (albeit greyed out and in the inactive state when
>>> >> > viewed through vSphere Client).
>>> >> >
>>> >> > Any thoughts on this? I've looked into this on the web and the
>>> >> > general consensus is that the datastore is still somehow in use by
>>> >> > vCenter. Not sure why that would be, though.
>>> >>
>>> >> Have you checked whether the datastore is unmounted from all hosts
>>> >> within the cluster? When an iSCSI target is added as a VMFS datastore,
>>> >> I believe all hosts within the cluster will mount it automatically. To
>>> >> remove the datastore from vCenter, you probably need to make sure the
>>> >> datastore is unmounted from all hosts.
>>> >>
>>> >> > Thanks!
>>> >> > Mike
>>> >> >
>>> >> > On Mon, Mar 31, 2014 at 2:45 PM, Kelven Yang <kelven.y...@citrix.com> wrote:
>>> >> >
>>> >> >> On 3/29/14, 7:31 PM, "Sateesh Chodapuneedi"
>>> >> >> <sateesh.chodapune...@citrix.com> wrote:
>>> >> >>
>>> >> >> >> -----Original Message-----
>>> >> >> >> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
>>> >> >> >> Sent: 30 March 2014 00:06
>>> >> >> >> To: dev@cloudstack.apache.org
>>> >> >> >> Subject: [QUESTION] VMware ServerResource
>>> >> >> >>
>>> >> >> >> Hi,
>>> >> >> >>
>>> >> >> >> Quick question:
>>> >> >> >>
>>> >> >> >> For VMware, since we have vCenter Server in the mix as opposed to
>>> >> >> >> just ESX(i) hosts, I was wondering how that works out with our
>>> >> >> >> related ServerResources.
>>> >> >> >>
>>> >> >> >> For example, if you have a cluster with three ESX hosts, does that
>>> >> >> >> equate to three ServerResources running on the management server?
>>> >> >> >
>>> >> >> > Yes, each host is tracked by a server resource.
>>> >> >> > CloudStack retrieves the owning cluster/datacenter as required
>>> >> >> > from vCenter and performs the required operations.
>>> >> >> >
>>> >> >> >> Assuming that leads to three ServerResources in that situation,
>>> >> >> >> if you have multiple management servers for your cloud, do all
>>> >> >> >> three of these ServerResources have to be managed by a single
>>> >> >> >> management server (because their resources are in the same
>>> >> >> >> cluster)?
>>> >> >> >
>>> >> >> > I think it is not required that they be managed by a single
>>> >> >> > management server.
>>> >> >>
>>> >> >> Correct, they are not required to be managed by a single management
>>> >> >> server. One thing to note: all resource instances now share a pool
>>> >> >> of vCenter sessions; such a session is acquired and released by a
>>> >> >> server resource when it needs to perform operations against vCenter.
>>> >> >>
>>> >> >> >> Thanks!
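[Editor's note: the shared vCenter session pool Kelven mentions, where each per-host server resource checks a session out for the duration of one operation and returns it afterwards, can be illustrated with a toy sketch like the one below. This is not CloudStack's actual implementation; the class and method names are invented for illustration.]

```python
import queue
from contextlib import contextmanager

class VcenterSessionPool:
    """Toy shared session pool: server resources acquire a session per
    operation and release it when done, instead of holding one each."""
    def __init__(self, size):
        self._sessions = queue.Queue()
        for i in range(size):
            self._sessions.put("session-%d" % i)  # stand-ins for live sessions

    @contextmanager
    def session(self):
        s = self._sessions.get()   # blocks if every session is checked out
        try:
            yield s                # the server resource talks to vCenter here
        finally:
            self._sessions.put(s)  # returned for the next server resource
```

The design lets many server resources (one per ESX(i) host) share a bounded number of vCenter connections rather than each holding a dedicated one.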
>>> >> >> >> >>> >> >> >> -- >>> >> >> >> *Mike Tutkowski* >>> >> >> >> *Senior CloudStack Developer, SolidFire Inc.* >>> >> >> >> e: mike.tutkow...@solidfire.com >>> >> >> >> o: 303.746.7302 >>> >> >> >> Advancing the way the world uses the >>> >> >> >> cloud<http://solidfire.com/solution/overview/?video=play> >>> >> >> >> *(tm)* >>> >> >> >>> >> >> >>> >> > >>> >> > >>> >> >-- >>> >> >*Mike Tutkowski* >>> >> >*Senior CloudStack Developer, SolidFire Inc.* >>> >> >e: mike.tutkow...@solidfire.com >>> >> >o: 303.746.7302 >>> >> >Advancing the way the world uses the >>> >> >cloud<http://solidfire.com/solution/overview/?video=play> >>> >> >*(tm)* >>> >> >>> >> >>> > >>> > >>> >-- >>> >*Mike Tutkowski* >>> >*Senior CloudStack Developer, SolidFire Inc.* >>> >e: mike.tutkow...@solidfire.com >>> >o: 303.746.7302 >>> >Advancing the way the world uses the >>> >cloud<http://solidfire.com/solution/overview/?video=play> >>> >*(tm)* >>> >>> >> >> >> -- >> *Mike Tutkowski* >> *Senior CloudStack Developer, SolidFire Inc.* >> e: mike.tutkow...@solidfire.com >> o: 303.746.7302 >> Advancing the way the world uses the >> cloud<http://solidfire.com/solution/overview/?video=play> >> *(tm)* >> > > > > -- > *Mike Tutkowski* > *Senior CloudStack Developer, SolidFire Inc.* > e: mike.tutkow...@solidfire.com > o: 303.746.7302 > Advancing the way the world uses the > cloud<http://solidfire.com/solution/overview/?video=play> > *(tm)* > -- *Mike Tutkowski* *Senior CloudStack Developer, SolidFire Inc.* e: mike.tutkow...@solidfire.com o: 303.746.7302 Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play> *(tm)*