Ahh, Packstack does not have pcs; that's only TripleO (OOO).
And what does your cinder-volume log (/var/log/cinder/volume.log) say?
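
In case it helps, here is roughly what I would check first (just a sketch; the exact log file name can vary by release, e.g. volume.log vs cinder-volume.log):

# is the volume service actually up?
systemctl status openstack-cinder-volume

# look for recent tracebacks around volume activation
tail -n 200 /var/log/cinder/volume.log
grep -iE 'error|trace' /var/log/cinder/volume.log | tail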
 
Is this an all-in-one deployment or split across nodes? It looks like your Cinder is having some issues.
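
That "transaction_id is 0, while expected 72" in your lvchange output below usually means the thin pool metadata got out of sync with the LVM metadata. One possible recovery path, strictly a sketch that I have not verified against your setup, so save the VG metadata before touching anything:

# back up the VG metadata first
vgcfgbackup -f /root/cinder-volumes.vgbackup cinder-volumes

# try an automatic thin-pool metadata repair (the pool must be inactive)
lvchange -an cinder-volumes/cinder-volumes-pool
lvconvert --repair cinder-volumes/cinder-volumes-pool

# then retry activation and see what comes back
vgchange -ay cinder-volumes
lvs cinder-volumes

If lvconvert --repair refuses to run, there is also the manual route of fixing transaction_id in the vgcfgbackup file and restoring it with vgcfgrestore --force, but do not attempt that without the metadata backup. Also note that lvchange wants VG/LV, so it would be lvchange -ay cinder-volumes/volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5, not the bare LV name.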
Check out targetcli; here are a couple of reference pages:

https://www.certdepot.net/rhel7-configure-iscsi-target-initiator-persistently/ 

https://docs.openstack.org/mitaka/install-guide-rdo/cinder-storage-install.html 
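For the iSCSI side, a quick sanity check would be something like this (again a sketch; it assumes the LIO target as in the guides above):

# is the LIO target service running, and is the saved config present?
systemctl status target
ls -l /etc/target/saveconfig.json

# dump the current target/backstore/LUN layout
targetcli ls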



> On Mar 20, 2018, at 10:03 PM, Father Vlasie <fv@spots.school> wrote:
> 
> RDO PackStack
> 
> https://www.rdoproject.org/install/packstack/ 
> 
> 
>> On Mar 20, 2018, at 9:35 PM, r...@italy1.com wrote:
>> 
>> How did you install OpenStack? 
>> 
>> Sent from my iPhone X
>> 
>>> On Mar 20, 2018, at 6:29 PM, Father Vlasie <fv@spots.school> wrote:
>>> 
>>> [root@plato ~]# pcs status
>>> -bash: pcs: command not found
>>> 
>>> 
>>>> On Mar 20, 2018, at 6:28 PM, Remo Mattei <r...@italy1.com> wrote:
>>>> 
>>>> Looks like your Pacemaker is not running; check that out!
>>>> 
>>>> sudo pcs status 
>>>> 
>>>>> On Mar 20, 2018, at 6:24 PM, Father Vlasie <fv@spots.school> wrote:
>>>>> 
>>>>> Your help is much appreciated! Thank you.
>>>>> 
>>>>> The cinder service is running on the controller node, and it is using a
>>>>> disk partition, not the loopback device; I changed the default
>>>>> configuration during install with PackStack.
>>>>> 
>>>>> [root@plato ~]# pvs
>>>>> PV         VG             Fmt  Attr PSize    PFree   
>>>>> /dev/vda3  centos         lvm2 a--  1022.80g    4.00m
>>>>> /dev/vdb1  cinder-volumes lvm2 a--   <10.00t <511.85g
>>>>> 
>>>>> [root@plato ~]# lvchange -a y volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>>>> Volume group "volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5" not found
>>>>> Cannot process volume group volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>>>> 
>>>>> [root@plato ~]# lvchange -a y cinder-volumes
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Mar 20, 2018, at 6:05 PM, Vagner Farias <vfar...@redhat.com> wrote:
>>>>>> 
>>>>>> Will "lvchange -a y lvname" activate it?
>>>>>> 
>>>>>> If not, considering that you're using Pike on CentOS, there's a chance
>>>>>> your cinder-volumes VG is backed by a loopback file. I believe both
>>>>>> packstack and tripleo configure this by default if you don't change the
>>>>>> configuration, and at least tripleo won't set this loopback device up to
>>>>>> be activated automatically on boot. An option would be to include lines
>>>>>> like the following in /etc/rc.d/rc.local:
>>>>>> 
>>>>>> # re-attach the backing file to a loop device, then rescan so LVM sees the VG
>>>>>> losetup /dev/loop0 /var/lib/cinder/cinder-volumes
>>>>>> vgscan
>>>>>> 
>>>>>> Last but not least, if this is actually the case, I wouldn't recommend
>>>>>> using loopback devices with the LVM iSCSI driver. In fact, if you can use
>>>>>> any other driver capable of delivering HA, it'd be better (unless this is
>>>>>> a POC or an environment without tight SLAs).
>>>>>> 
>>>>>> Vagner Farias
>>>>>> 
>>>>>> 
>>>>>> On Tue, Mar 20, 2018, at 9:24 PM, Father Vlasie <fv@spots.school> wrote:
>>>>>> Here is the output of lvdisplay:
>>>>>> 
>>>>>> [root@plato ~]# lvdisplay
>>>>>> --- Logical volume ---
>>>>>> LV Name                cinder-volumes-pool
>>>>>> VG Name                cinder-volumes
>>>>>> LV UUID                PEkGKb-fhAc-CJD2-uDDA-k911-SIX9-1uyvFo
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time plato, 2018-02-01 13:33:51 -0800
>>>>>> LV Pool metadata       cinder-volumes-pool_tmeta
>>>>>> LV Pool data           cinder-volumes-pool_tdata
>>>>>> LV Status              NOT available
>>>>>> LV Size                9.50 TiB
>>>>>> Current LE             2490368
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/cinder-volumes/volume-8f4a5fff-749f-47fe-976f-6157f58a4d9e
>>>>>> LV Name                volume-8f4a5fff-749f-47fe-976f-6157f58a4d9e
>>>>>> VG Name                cinder-volumes
>>>>>> LV UUID                C2o7UD-uqFp-3L3r-F0Ys-etjp-QBJr-idBhb0
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time plato, 2018-02-02 10:18:41 -0800
>>>>>> LV Pool name           cinder-volumes-pool
>>>>>> LV Status              NOT available
>>>>>> LV Size                1.00 GiB
>>>>>> Current LE             256
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/cinder-volumes/volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3
>>>>>> LV Name                volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3
>>>>>> VG Name                cinder-volumes
>>>>>> LV UUID                qisf80-j4XV-PpFy-f7yt-ZpJS-99v0-m03Ql4
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time plato, 2018-02-02 10:26:46 -0800
>>>>>> LV Pool name           cinder-volumes-pool
>>>>>> LV Status              NOT available
>>>>>> LV Size                1.00 GiB
>>>>>> Current LE             256
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/cinder-volumes/volume-ee107488-2559-4116-aa7b-0da02fd5f693
>>>>>> LV Name                volume-ee107488-2559-4116-aa7b-0da02fd5f693
>>>>>> VG Name                cinder-volumes
>>>>>> LV UUID                FS9Y2o-HYe2-HK03-yM0Z-P7GO-kAzD-cOYNTb
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time plato.spots.onsite, 2018-02-12 10:28:57 -0800
>>>>>> LV Pool name           cinder-volumes-pool
>>>>>> LV Status              NOT available
>>>>>> LV Size                40.00 GiB
>>>>>> Current LE             10240
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/cinder-volumes/volume-d6f0260d-21b5-43e7-afe5-84e0502fa734
>>>>>> LV Name                volume-d6f0260d-21b5-43e7-afe5-84e0502fa734
>>>>>> VG Name                cinder-volumes
>>>>>> LV UUID                b6pX01-mOEH-3j3K-32NJ-OHsz-UMQe-y10vSM
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time plato.spots.onsite, 2018-02-14 14:24:41 -0800
>>>>>> LV Pool name           cinder-volumes-pool
>>>>>> LV Status              NOT available
>>>>>> LV Size                40.00 GiB
>>>>>> Current LE             10240
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/cinder-volumes/volume-a7bd0bc8-8cbc-4053-bdc2-2eb9bfb0f147
>>>>>> LV Name                volume-a7bd0bc8-8cbc-4053-bdc2-2eb9bfb0f147
>>>>>> VG Name                cinder-volumes
>>>>>> LV UUID                T07JAE-3CNU-CpwN-BUEr-aAJG-VxP5-1qFYZz
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time plato.spots.onsite, 2018-03-12 10:33:24 -0700
>>>>>> LV Pool name           cinder-volumes-pool
>>>>>> LV Status              NOT available
>>>>>> LV Size                4.00 GiB
>>>>>> Current LE             1024
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/cinder-volumes/volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>>>>> LV Name                volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>>>>> VG Name                cinder-volumes
>>>>>> LV UUID                IB0q1n-NnkR-tx5w-BbBu-LamG-jCbQ-mYXWyC
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time plato.spots.onsite, 2018-03-14 09:52:14 -0700
>>>>>> LV Pool name           cinder-volumes-pool
>>>>>> LV Status              NOT available
>>>>>> LV Size                40.00 GiB
>>>>>> Current LE             10240
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/centos/root
>>>>>> LV Name                root
>>>>>> VG Name                centos
>>>>>> LV UUID                nawE4n-dOHs-VsNH-f9hL-te05-mvGC-WoFQzv
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time localhost, 2018-01-22 09:50:38 -0800
>>>>>> LV Status              available
>>>>>> # open                 1
>>>>>> LV Size                50.00 GiB
>>>>>> Current LE             12800
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> - currently set to     8192
>>>>>> Block device           253:0
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/centos/swap
>>>>>> LV Name                swap
>>>>>> VG Name                centos
>>>>>> LV UUID                Vvlni4-nwTl-ORwW-Gg8b-5y4h-kXJ5-T67cKU
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time localhost, 2018-01-22 09:50:38 -0800
>>>>>> LV Status              available
>>>>>> # open                 2
>>>>>> LV Size                8.12 GiB
>>>>>> Current LE             2080
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> - currently set to     8192
>>>>>> Block device           253:1
>>>>>> 
>>>>>> --- Logical volume ---
>>>>>> LV Path                /dev/centos/home
>>>>>> LV Name                home
>>>>>> VG Name                centos
>>>>>> LV UUID                lCXJ7v-jeOC-DFKI-unXa-HUKx-9DXp-nmzSMg
>>>>>> LV Write Access        read/write
>>>>>> LV Creation host, time localhost, 2018-01-22 09:50:39 -0800
>>>>>> LV Status              available
>>>>>> # open                 1
>>>>>> LV Size                964.67 GiB
>>>>>> Current LE             246956
>>>>>> Segments               1
>>>>>> Allocation             inherit
>>>>>> Read ahead sectors     auto
>>>>>> - currently set to     8192
>>>>>> Block device           253:2
>>>>>> 
>>>>>> 
>>>>>>> On Mar 20, 2018, at 4:51 PM, Remo Mattei <r...@italy1.com> wrote:
>>>>>>> 
>>>>>>> I think you need to provide a bit of additional info. Did you look at
>>>>>>> the logs? What version of the OS are you running? Etc.
>>>>>>> 
>>>>>>> Sent from iPhone
>>>>>>> 
>>>>>>>> On Mar 20, 2018, at 4:15 PM, Father Vlasie <fv@spots.school> wrote:
>>>>>>>> 
>>>>>>>> Hello everyone,
>>>>>>>> 
>>>>>>>> I am in need of help with my Cinder volumes which have all become 
>>>>>>>> unavailable.
>>>>>>>> 
>>>>>>>> Is there anyone who would be willing to log in to my system and have a 
>>>>>>>> look?
>>>>>>>> 
>>>>>>>> My cinder volumes are all listed as "NOT available" and my attempts to
>>>>>>>> mount them have been in vain. I have tried: vgchange -a y
>>>>>>>> 
>>>>>>>> with the result: 0 logical volume(s) in volume group
>>>>>>>> "cinder-volumes" now active
>>>>>>>> 
>>>>>>>> I am a bit desperate because some of the data is critical and, I am 
>>>>>>>> ashamed to say, I do not have a backup.
>>>>>>>> 
>>>>>>>> Any help or suggestions would be very much appreciated.
>>>>>>>> 
>>>>>>>> FV
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
> 

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
