Right,

so I managed to reproduce your issue twice, using ACS 4.13/master and
VMware 6.5... I got the same error message after a lot of tasks were
executed on the VMware side.

The steps I took were: on the first try I deliberately did not add the
iSCSI Software adapter, but then, when trying to spin up a VM, there is
obviously no IQN identifier assigned to the ESXi hosts, and thus none
recorded in the DB (hosts table) - so even though SF was added to ACS
(SolidFire plugin, Managed, proper URL defined), spinning up a VM fails,
as expected.
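
For reference, adding SF as managed Primary Storage boils down to a single
createStoragePool API call. Below is only a rough sketch using the Python
"cs" client - the endpoint, keys, zone ID, capacities and the URL values
are all placeholders, and scope/hypervisor may differ in your setup:

# rough sketch only - adjust IDs/credentials/capacities to your environment
from cs import CloudStack

acs = CloudStack(endpoint="http://mgmt.example.org:8080/client/api",
                 key="API_KEY", secret="SECRET_KEY")

sf_url = ("MVIP=<MVIP>;SVIP=<SVIP>;"
          "clusterAdminUsername=<USERNAME>;clusterAdminPassword=<PASSWORD>;"
          "clusterDefaultMinIops=100;clusterDefaultMaxIops=200;"
          "clusterDefaultBurstIopsPercentOfMaxIops=1.5")

acs.createStoragePool(
    name="SolidFire-Managed",
    zoneid="ZONE_UUID",
    scope="ZONE",                  # or CLUSTER, depending on how you add it
    hypervisor="VMware",
    provider="SolidFire",
    managed=True,
    capacitybytes=2199023255552,   # example: 2 TiB
    capacityiops=100000,           # example value
    url=sf_url,
)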

Second try: clean up everything in ACS, wipe the Primary Storage, add the
iSCSI Software adapter on the vCenter/ESXi hosts, configure proper port
binding to vSwitchXXX, then add SF again (SolidFire provider, Managed,
proper URL) to ACS and try to spin up a VM.
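
The ESXi-side prep can also be scripted if you have many hosts - a rough
pyVmomi sketch (the vCenter credentials, vmhba64 and vmk1 are just example
values):

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.org",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# loop over every ESXi host (narrow this to the compute cluster in practice)
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    storage = host.configManager.storageSystem
    storage.UpdateSoftwareInternetScsiEnabled(True)   # enable software iSCSI adapter
    host.configManager.iscsiManager.BindVnic("vmhba64", "vmk1")  # port-bind the existing VMkernel NIC
    storage.RescanAllHba()   # so the IQN is in place before ACS reconnects the host

Disconnect(si)
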
Now a series of things happens in vCenter (a rough sketch of a few of the
corresponding API calls follows the list):

- Adding static iSCSI targets to the ESXi hosts
- Rescanning HBAs
- Creating a datastore the same size as the volume/template itself
- Deploying the OVF template
- Unregistering the VM
- Moving files around
- Unmounting the VMFS
- Removing the iSCSI static targets
- Rescanning HBAs
- Adding the iSCSI static targets again
- Rescanning HBAs
- Rescanning VMFS
- RENAMING the datastore
- Unmounting the datastore
- Removing the iSCSI targets.
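
A few of those steps map to HostStorageSystem calls roughly like these
(again just a sketch, reusing the host object and imports from the snippet
above; the SVIP and IQN are placeholders):

target = vim.host.InternetScsiHba.StaticTarget(
    address="<SVIP>", port=3260,
    iScsiName="iqn.2010-01.com.solidfire:<account>.<volume>")  # example IQN

storage = host.configManager.storageSystem
storage.AddInternetScsiStaticTargets("vmhba64", [target])   # add the static iSCSI target
storage.RescanAllHba()                                      # rescan HBAs
storage.RescanVmfs()                                        # rescan VMFS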

The error from the ACS is:
message: Datastore '-iqn.2010-01.com.solidfire:hl1k.root-32.29-0' is not
accessible. No connected and accessible host is attached to this datastore

The problem is that this datastore (in its latest, renamed state) is
unmounted from the ESXi hosts, but it can't be removed, NOR can I mount it
- I get the vCenter message of "Operation failed, diagnostics report:
Unable to find volume uuid[5d7abd9a-273aa9d5-bffe-1e00d4010711] lvm
[snap-329aa3ea-5d7abd01-a5c83210-c87c-1e00d4010711] devices"
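
The "snap-..." lvm label makes me suspect ESXi now treats that VMFS volume
as an unresolved snapshot copy; a quick way to check, again with pyVmomi
and the host object from the earlier sketch:

storage = host.configManager.storageSystem

# any VMFS volumes the host considers unresolved snapshot copies?
for vol in storage.QueryUnresolvedVmfsVolume():
    for extent in vol.extent:
        print(vol.vmfsLabel, vol.vmfsUuid, extent.devicePath)

# mount/accessibility state of every filesystem the host knows about
for fs in storage.fileSystemVolumeInfo.mountInfo:
    print(fs.volume.name, fs.volume.type,
          fs.mountInfo.mounted, fs.mountInfo.accessible)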

So something is broken here...

Will try other scenarios soon (SolidFire Shared, etc).

On Thu, 5 Sep 2019 at 11:33, Andrija Panic <[email protected]> wrote:

> That sounds OK to me, the steps to add SF. That should create a datastore
> per each volume you create (if I'm not mistaken). The other way is to use
> the SolidFireShared plugin, which should create a single datastore and
> place all volumes in it (datastore=LUN=single SF volume).
>
> Can you please answer my question from the previous email, and also check
> the datastore statuses in vCenter for any errors - something is not
> right...
>
> Andrija
>
> On Thu, Sep 5, 2019, 16:15 <[email protected]> wrote:
>
>> Hi Andrija,
>>
>> > On 5. Sep 2019, at 15:07, Andrija Panic <[email protected]> wrote:
>>
>> > the message is that no host is connected to that specific datastore -
>> > "Unable to start VM on Host[-1-Routing] due to StartCommand failed due to
>> > Exception: java.lang.RuntimeException
>> > Message: Datastore '-iqn.2010-01.com.solidfire:x64j.root-29.17-0' is not
>> > accessible. ***No connected and accessible host is attached to this
>> > datastore***."
>> >
>> > You can see that message being returned by VMware actually, not ACS (I
>> > checked the code for that message - no results)
>> >
>> > https://vmninja.wordpress.com/2019/04/05/remove-inaccessible-datastore-from-inventory/
>> >
>> I assumed the message came from ACS since there were no corresponding
>> messages in vCenter.
>> Before every new deployment attempt I make sure there are no leftovers,
>> neither in vCenter nor on the SolidFire.
>>
>> >
>> > Can you describe exactly how you added SF to ACS/VMware - you already
>> > wrote that you created iSCSI HBAs...? What are the parameters/options
>> > used to add SF as Primary Storage to ACS? I expect (since there is no
>> > proper documentation yet) that you might have a somehow incomplete or
>> > wrong setup in place.
>> In vCenter there was already a VMkernel interface which is used for NFS
>> datastores.
>> So first I created an iSCSI Software Adapter and added the existing
>> VMkernel interface via the network port binding option to the iSCSI
>> Software Adapter.
>> Afterwards I did a forced re-connect of the host in ACS.
>>
>> Then I followed the YouTube guide by Mike to add the SolidFire as primary
>> storage with:
>> Protocol = custom
>> Provider = solidfire
>> Managed = true
>> Filled in IOPS and Capacity and the URL as follows:
>>
>> MVIP=<MVIP>;SVIP=<SVIP>;clusterAdminUsername=<USERNAME>;clusterAdminPassword=<PASSWORD>;clusterDefaultMinIops=100;clusterDefaultMaxIops=200;clusterDefaultBurstIopsPercentOfMaxIops=1.5
>>
>> The odd thing is that I managed to have one deployment working, but only
>> once.
>> This VM was running fine, I could ssh to it, use the console etc...
>>
>> Regards
>> Christian
>> > On Thu, 5 Sep 2019 at 13:29, <[email protected]> wrote:
>> >
>> >> Thanks for taking time to look into it.
>> >>
>> >> https://pastebin.com/utPhEVkW
>> >>
>> >> Regards
>> >> Christian
>> >>> On 5. Sep 2019, at 13:16, Andrija Panic <[email protected]> wrote:
>> >>>
>> >>> Can you share the mgmt logs when the problem happens? Please upload to
>> >>> pastebin or similar.
>> >>>
>> >>> Andrija
>> >>>
>> >>> On Thu, 5 Sep 2019 at 11:45, <[email protected]> wrote:
>> >>>
>> >>>> Hi,
>> >>>>
>> >>>> I have managed to overcome the problem by forcing CloudStack to
>> >>>> reconnect the host after I configured the iSCSI HBA.
>> >>>> It seems that CloudStack also scans for such capabilities during the
>> >>>> reconnect.
>> >>>>
>> >>>> But now I have trouble deploying VMs on the storage; sometimes it is
>> >>>> successful and sometimes not, and I was not able to find a pattern.
>> >>>> If the deployment fails, CloudStack says that the IQN was not
>> >>>> reachable by the host. The odd part is that there is no such message
>> >>>> in vCenter, so it seems there was no attempt to attach the storage to
>> >>>> the ESXi…
>> >>>>
>> >>>> Has anyone seen this kind of issue?
>> >>>>
>> >>>> Regards
>> >>>> Christian
>> >>>>
>> >>>>
>> >>>> On 2019/09/04 14:13:36, <[email protected]> wrote:
>> >>>>> Hi,
>> >>>>>
>> >>>>> we are currently doing a PoC with SolidFire and CloudStack and
>> >>>>> trying to figure out if it's a fitting solution for our use cases.
>> >>>>>
>> >>>>> But I am stuck at the point when CloudStack tries to create a VM on
>> >>>>> the SolidFire storage.
>> >>>>> I can see that it has already copied the template to a SolidFire
>> >>>>> volume, but then the error message "Not all hosts in the compute
>> >>>>> cluster support iSCSI." appears in the logs.
>> >>>>>
>> >>>>> On the ESXi I have created an iSCSI HBA and attached it to a
>> >>>>> VMkernel adapter; is there anything else to do?
>> >>>>> Is there any documentation for the setup?
>> >>>>> I have only found the YouTube videos by Mike, but they do not focus
>> >>>>> on the vSphere setup part.
>> >>>>>
>> >>>>> Regards
>> >>>>> Christian
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Christian Kirmse
>> >>>> Fraunhofer-Gesellschaft e.V. / Zentrale
>> >>>> Abteilung C7 Kommunikationsmanagement
>> >>>> Schloss Birlinghoven, 53754 Sankt Augustin
>> >>>> Tel: (+49 2241) 14-2719
>> >>>> Fax: (+49 2241) 144-2719
>> >>>> mailto:[email protected]
>> >>>> http://www.fraunhofer.de
>> >>>>
>> >>>>
>> >>>
>> >>> --
>> >>>
>> >>> Andrija Panić
>> >>
>> >>
>> >
>> > --
>> >
>> > Andrija Panić
>>
>>

-- 

Andrija Panić
