I am not sure what to do with targetcli, but here is the configuration data for 
one of the volumes:

 {
      "fabric": "iscsi", 
      "tpgs": [
        {
          "attributes": {
            "authentication": 1, 
            "cache_dynamic_acls": 0, 
            "default_cmdsn_depth": 64, 
            "default_erl": 0, 
            "demo_mode_discovery": 1, 
            "demo_mode_write_protect": 1, 
            "generate_node_acls": 0, 
            "login_timeout": 15, 
            "netif_timeout": 2, 
            "prod_mode_write_protect": 0, 
            "t10_pi": 0, 
            "tpg_enabled_sendtargets": 1
          }, 
          "enable": true, 
          "luns": [], 
          "node_acls": [
            {
              "attributes": {
                "dataout_timeout": 3, 
                "dataout_timeout_retries": 5, 
                "default_erl": 0, 
                "nopin_response_timeout": 30, 
                "nopin_timeout": 15, 
                "random_datain_pdu_offsets": 0, 
                "random_datain_seq_offsets": 0, 
                "random_r2t_offsets": 0
              }, 
              "chap_password": "QiDXtwCz6RNyhjoY", 
              "chap_userid": "7o9NAiS4ja7ZbQXPY6Fm", 
              "mapped_luns": [], 
              "node_wwn": "iqn.1994-05.com.redhat:3c791e84a21"
            }
          ], 
          "parameters": {
            "AuthMethod": "CHAP", 
            "DataDigest": "CRC32C,None", 
            "DataPDUInOrder": "Yes", 
            "DataSequenceInOrder": "Yes", 
            "DefaultTime2Retain": "20", 
            "DefaultTime2Wait": "2", 
            "ErrorRecoveryLevel": "0", 
            "FirstBurstLength": "65536", 
            "HeaderDigest": "CRC32C,None", 
            "IFMarkInt": "2048~65535", 
            "IFMarker": "No", 
            "ImmediateData": "Yes", 
            "InitialR2T": "Yes", 
            "MaxBurstLength": "262144", 
            "MaxConnections": "1", 
            "MaxOutstandingR2T": "1", 
            "MaxRecvDataSegmentLength": "8192", 
            "MaxXmitDataSegmentLength": "262144", 
            "OFMarkInt": "2048~65535", 
            "OFMarker": "No", 
            "TargetAlias": "LIO Target"
          }, 
          "portals": [
            {
              "ip_address": "192.168.3.11", 
              "iser": false, 
              "offload": false, 
              "port": 3260
            }
          ], 
          "tag": 1
        }
      ], 
      "wwn": "iqn.2010-10.org.openstack:volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3"
    },
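For reference, a minimal sketch of how a dump like this might be inspected and 
re-applied, assuming the stock targetcli/rtslib-fb tooling and its default 
save path (assumptions on my part, not verified on this host):

    targetcli ls                                    # browse the live LIO config tree
    targetcli saveconfig                            # dump it to /etc/target/saveconfig.json
    targetctl restore /etc/target/saveconfig.json   # re-apply a saved JSON config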

> On Mar 21, 2018, at 5:35 PM, r...@italy1.com wrote:
> 
> OK, the pool is fine. It looks like you have several volumes, probably VMs. 
> Did you check targetcli? I cannot remember what your cinder-volume log says. 
> Can you try to create a volume (cinder create 1), look at the log, and see 
> what the error is? Add --debug to the command (that is a dash-dash; somehow my 
> iPhone converted it).
> 
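A minimal sketch of that suggestion, assuming the standard python-cinderclient 
CLI (where --debug is a global flag) and the usual RDO log location:

    cinder --debug create 1              # create a 1 GiB test volume, tracing the API calls
    tail -f /var/log/cinder/volume.log   # watch cinder-volume for the underlying LVM error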
>  Sent from my iPhone X 
> 
> On Mar 21, 2018, at 5:28 PM, Father Vlasie <fv@spots.school> wrote:
> 
>> [root@plato ~]# lvdisplay
>>   --- Logical volume ---
>>   LV Name                cinder-volumes-pool
>>   VG Name                cinder-volumes
>>   LV UUID                PEkGKb-fhAc-CJD2-uDDA-k911-SIX9-1uyvFo
>>   LV Write Access        read/write
>>   LV Creation host, time plato, 2018-02-01 13:33:51 -0800
>>   LV Pool metadata       cinder-volumes-pool_tmeta
>>   LV Pool data           cinder-volumes-pool_tdata
>>   LV Status              NOT available
>>   LV Size                9.50 TiB
>>   Current LE             2490368
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/cinder-volumes/volume-8f4a5fff-749f-47fe-976f-6157f58a4d9e
>>   LV Name                volume-8f4a5fff-749f-47fe-976f-6157f58a4d9e
>>   VG Name                cinder-volumes
>>   LV UUID                C2o7UD-uqFp-3L3r-F0Ys-etjp-QBJr-idBhb0
>>   LV Write Access        read/write
>>   LV Creation host, time plato, 2018-02-02 10:18:41 -0800
>>   LV Pool name           cinder-volumes-pool
>>   LV Status              NOT available
>>   LV Size                1.00 GiB
>>   Current LE             256
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/cinder-volumes/volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3
>>   LV Name                volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3
>>   VG Name                cinder-volumes
>>   LV UUID                qisf80-j4XV-PpFy-f7yt-ZpJS-99v0-m03Ql4
>>   LV Write Access        read/write
>>   LV Creation host, time plato, 2018-02-02 10:26:46 -0800
>>   LV Pool name           cinder-volumes-pool
>>   LV Status              NOT available
>>   LV Size                1.00 GiB
>>   Current LE             256
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/cinder-volumes/volume-ee107488-2559-4116-aa7b-0da02fd5f693
>>   LV Name                volume-ee107488-2559-4116-aa7b-0da02fd5f693
>>   VG Name                cinder-volumes
>>   LV UUID                FS9Y2o-HYe2-HK03-yM0Z-P7GO-kAzD-cOYNTb
>>   LV Write Access        read/write
>>   LV Creation host, time plato.spots.onsite, 2018-02-12 10:28:57 -0800
>>   LV Pool name           cinder-volumes-pool
>>   LV Status              NOT available
>>   LV Size                40.00 GiB
>>   Current LE             10240
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/cinder-volumes/volume-d6f0260d-21b5-43e7-afe5-84e0502fa734
>>   LV Name                volume-d6f0260d-21b5-43e7-afe5-84e0502fa734
>>   VG Name                cinder-volumes
>>   LV UUID                b6pX01-mOEH-3j3K-32NJ-OHsz-UMQe-y10vSM
>>   LV Write Access        read/write
>>   LV Creation host, time plato.spots.onsite, 2018-02-14 14:24:41 -0800
>>   LV Pool name           cinder-volumes-pool
>>   LV Status              NOT available
>>   LV Size                40.00 GiB
>>   Current LE             10240
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/cinder-volumes/volume-a7bd0bc8-8cbc-4053-bdc2-2eb9bfb0f147
>>   LV Name                volume-a7bd0bc8-8cbc-4053-bdc2-2eb9bfb0f147
>>   VG Name                cinder-volumes
>>   LV UUID                T07JAE-3CNU-CpwN-BUEr-aAJG-VxP5-1qFYZz
>>   LV Write Access        read/write
>>   LV Creation host, time plato.spots.onsite, 2018-03-12 10:33:24 -0700
>>   LV Pool name           cinder-volumes-pool
>>   LV Status              NOT available
>>   LV Size                4.00 GiB
>>   Current LE             1024
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/cinder-volumes/volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>   LV Name                volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>   VG Name                cinder-volumes
>>   LV UUID                IB0q1n-NnkR-tx5w-BbBu-LamG-jCbQ-mYXWyC
>>   LV Write Access        read/write
>>   LV Creation host, time plato.spots.onsite, 2018-03-14 09:52:14 -0700
>>   LV Pool name           cinder-volumes-pool
>>   LV Status              NOT available
>>   LV Size                40.00 GiB
>>   Current LE             10240
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/centos/root
>>   LV Name                root
>>   VG Name                centos
>>   LV UUID                nawE4n-dOHs-VsNH-f9hL-te05-mvGC-WoFQzv
>>   LV Write Access        read/write
>>   LV Creation host, time localhost, 2018-01-22 09:50:38 -0800
>>   LV Status              available
>>   # open                 1
>>   LV Size                50.00 GiB
>>   Current LE             12800
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     8192
>>   Block device           253:0
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/centos/swap
>>   LV Name                swap
>>   VG Name                centos
>>   LV UUID                Vvlni4-nwTl-ORwW-Gg8b-5y4h-kXJ5-T67cKU
>>   LV Write Access        read/write
>>   LV Creation host, time localhost, 2018-01-22 09:50:38 -0800
>>   LV Status              available
>>   # open                 2
>>   LV Size                8.12 GiB
>>   Current LE             2080
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     8192
>>   Block device           253:1
>>    
>>   --- Logical volume ---
>>   LV Path                /dev/centos/home
>>   LV Name                home
>>   VG Name                centos
>>   LV UUID                lCXJ7v-jeOC-DFKI-unXa-HUKx-9DXp-nmzSMg
>>   LV Write Access        read/write
>>   LV Creation host, time localhost, 2018-01-22 09:50:39 -0800
>>   LV Status              available
>>   # open                 1
>>   LV Size                964.67 GiB
>>   Current LE             246956
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     8192
>>   Block device           253:2
>>    
>> 
>> 
>>> On Mar 21, 2018, at 5:25 PM, r...@italy1.com wrote:
>>> 
>>> Can you do an lvdisplay?
>>> 
>>>  Sent from my iPhone X 
>>> 
>>> On Mar 21, 2018, at 5:23 PM, Father Vlasie <fv@spots.school> wrote:
>>> 
>>>> About 12TB altogether.
>>>> 
>>>>> On Mar 21, 2018, at 5:21 PM, r...@italy1.com wrote:
>>>>> 
>>>>> How much space do you have?
>>>>> 
>>>>>  Sent from my iPhone X 
>>>>> 
>>>>> On Mar 21, 2018, at 5:10 PM, Father Vlasie <fv@spots.school> wrote:
>>>>> 
>>>>>> Yes, I agree, it does seem to be an LVM issue rather than cinder. I will 
>>>>>> pursue that course.
>>>>>> 
>>>>>> Thank you all for your help, it is fantastic having a support mailing 
>>>>>> list like this!
>>>>>> 
>>>>>> FV
>>>>>> 
>>>>>>> On Mar 21, 2018, at 4:45 AM, Vagner Farias <vfar...@redhat.com> wrote:
>>>>>>> 
>>>>>>> It seems your LVM thin pool metadata is corrupt. I'm not familiar with 
>>>>>>> this issue and can't guide you on how to fix it. Although this could 
>>>>>>> have been caused by Cinder, it's an LVM issue, and if you don't get more 
>>>>>>> answers here you may try a Linux-related forum. 
>>>>>>> 
>>>>>>> A quick search for "lvm2 thinpool metadata mismatch" turns up several 
>>>>>>> possible causes and solution paths. 
>>>>>>> 
>>>>>>> I hope that helps. 
>>>>>>> 
>>>>>>> Vagner Farias
>>>>>>> 
>>>>>>> 
>>>>>>> On Tue, Mar 20, 2018, 10:29 PM, Father Vlasie <fv@spots.school> wrote:
>>>>>>> Your help is much appreciated! Thank you.
>>>>>>> 
>>>>>>> The cinder service is running on the controller node, and it is using a 
>>>>>>> disk partition, not the loopback device; I did change the default 
>>>>>>> configuration during install with PackStack.
>>>>>>> 
>>>>>>> [root@plato ~]# pvs
>>>>>>>   PV         VG             Fmt  Attr PSize    PFree
>>>>>>>   /dev/vda3  centos         lvm2 a--  1022.80g    4.00m
>>>>>>>   /dev/vdb1  cinder-volumes lvm2 a--   <10.00t <511.85g
>>>>>>> 
>>>>>>> [root@plato ~]# lvchange -a y volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>>>>>>   Volume group "volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5" not found
>>>>>>>   Cannot process volume group volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>>>>>> 
>>>>>>> [root@plato ~]# lvchange -a y cinder-volumes
>>>>>>>   Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>>>>   Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>>>>   Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>>>>   Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>>>>   Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>>>>   Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>>>>   Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
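That transaction_id mismatch is the thin-pool metadata corruption mentioned 
above. A commonly cited repair path, sketched here as a hedged outline only 
(back up the LVM metadata first, since thin-pool repair can be destructive):

    vgcfgbackup cinder-volumes                             # save the current LVM metadata
    lvconvert --repair cinder-volumes/cinder-volumes-pool  # run thin_repair against the pool metadata
    vgchange -ay cinder-volumes                            # then retry activation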
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> > On Mar 20, 2018, at 6:05 PM, Vagner Farias <vfar...@redhat.com> wrote:
>>>>>>> >
>>>>>>> > Will "lvchange -a y lvname" activate it?
>>>>>>> >
>>>>>>> > If not, considering that you're using Pike on CentOS, there's a 
>>>>>>> > chance you may be using a cinder-volumes VG backed by a loopback 
>>>>>>> > file. I guess both packstack & tripleo configure this by default if 
>>>>>>> > you don't change the configuration. At least tripleo won't configure 
>>>>>>> > this loopback device to be activated automatically on boot. An option 
>>>>>>> > would be to include lines like the following in /etc/rc.d/rc.local:
>>>>>>> >
>>>>>>> > losetup /dev/loop0 /var/lib/cinder/cinder-volumes
>>>>>>> > vgscan
>>>>>>> >
>>>>>>> > Last but not least, if this is actually the case, I wouldn't 
>>>>>>> > recommend using loopback devices with the LVM iSCSI driver. In fact, 
>>>>>>> > if you can use any other driver capable of delivering HA, it'd be 
>>>>>>> > better (unless this is a POC or an environment without tight SLAs).
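If the loopback case above applies, a hedged verification sketch using 
standard util-linux and LVM2 commands:

    losetup -a                    # confirm the loop device is attached to the backing file
    pvs                           # the loop device should then appear as a PV
    vgchange -ay cinder-volumes   # activation should then find the VG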
>>>>>>> >
>>>>>>> > Vagner Farias
>>>>>>> >
>>>>>>> >
>>>>>>> > On Tue, Mar 20, 2018, 9:24 PM, Father Vlasie <fv@spots.school> wrote:
>>>>>>> > Here is the output of lvdisplay:
>>>>>>> >
>>>>>>> > [root@plato ~]# lvdisplay
>>>>>>> > [... output identical to the lvdisplay listing quoted above ...]
>>>>>>> >
>>>>>>> >
>>>>>>> > > On Mar 20, 2018, at 4:51 PM, Remo Mattei <r...@italy1.com> wrote:
>>>>>>> > >
>>>>>>> > > I think you need to provide a bit of additional info. Did you look 
>>>>>>> > > at the logs? What version of OS are you running? Etc.
>>>>>>> > >
>>>>>>> > > Sent from iPhone
>>>>>>> > >
>>>>>>> > >> On Mar 20, 2018, at 4:15 PM, Father Vlasie <fv@spots.school> wrote:
>>>>>>> > >>
>>>>>>> > >> Hello everyone,
>>>>>>> > >>
>>>>>>> > >> I am in need of help with my Cinder volumes which have all become 
>>>>>>> > >> unavailable.
>>>>>>> > >>
>>>>>>> > >> Is there anyone who would be willing to log in to my system and 
>>>>>>> > >> have a look?
>>>>>>> > >>
>>>>>>> > >> My cinder volumes are listed as "NOT available" and my attempts to 
>>>>>>> > >> mount them have been in vain. I have tried: vgchange -a y
>>>>>>> > >> 
>>>>>>> > >> with the result:  0 logical volume(s) in volume group 
>>>>>>> > >> "cinder-volumes" now active
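A hedged diagnostic sketch for that failed activation, using standard LVM2 
tooling:

    lvs -a cinder-volumes            # list all LVs, including hidden _tmeta/_tdata volumes
    vgchange -ay -v cinder-volumes   # verbose activation, to surface the failing step
    dmsetup ls --tree                # show which device-mapper targets actually exist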
>>>>>>> > >>
>>>>>>> > >> I am a bit desperate because some of the data is critical and, I 
>>>>>>> > >> am ashamed to say, I do not have a backup.
>>>>>>> > >>
>>>>>>> > >> Any help or suggestions would be very much appreciated.
>>>>>>> > >>
>>>>>>> > >> FV
>>>>>>> > >
>>>>>>> >
>>>>>>> >
>>>>>>> 
>>>>>> 
>>>> 
>> 


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
