Thank you for your reply. I am running Pike on CentOS 7.
I have pasted some logs below that seem to contain errors. Are there any particular logs that I should look at?

[root@plato ~]# tail /var/log/cinder/volume.log
2018-03-20 17:03:27.963 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2018-03-20 17:03:37.964 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2018-03-20 17:03:47.965 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2018-03-20 17:03:57.966 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2018-03-20 17:04:07.967 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2018-03-20 17:04:17.969 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2018-03-20 17:04:18.081 2572 WARNING cinder.volume.manager [req-b519dd31-2a7a-4188-835a-1a6d3ea9b7b0 - - - - -] Update driver status failed: (config name lvm) is uninitialized.
2018-03-20 17:04:27.970 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2018-03-20 17:04:37.971 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2018-03-20 17:04:47.973 2572 ERROR cinder.service [-] Manager for service cinder-volume plato.spots.onsite@lvm is reporting problems, not sending heartbeat. Service will appear "down".

[root@plato ~]# tail /var/log/cinder/api.log
2018-03-20 10:51:01.395 2568 INFO eventlet.wsgi.server [req-dea5af45-a051-4f17-b931-e7f121cdeb59 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] 192.168.3.11 "GET /v2/e1ceb67d89314c01add05a0086772df3/volumes/ee107488-2559-4116-aa7b-0da02fd5f693 HTTP/1.1" status: 200 len: 1852 time: 0.1106622
2018-03-20 16:41:49.682 2570 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
2018-03-20 16:41:50.295 2570 INFO cinder.api.openstack.wsgi [req-1262bfa8-eb42-41f3-b481-0408d9ee95e3 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] GET http://192.168.3.11:8776/v2/e1ceb67d89314c01add05a0086772df3/limits
2018-03-20 16:41:50.413 2570 WARNING cinder.quota [req-1262bfa8-eb42-41f3-b481-0408d9ee95e3 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] Deprecated: Default quota for resource: snapshots_iscsi is set by the default quota flag: quota_snapshots_iscsi, it is now deprecated. Please use the default quota class for default quota.
2018-03-20 16:41:50.414 2570 WARNING cinder.quota [req-1262bfa8-eb42-41f3-b481-0408d9ee95e3 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] Deprecated: Default quota for resource: backup_gigabytes is set by the default quota flag: quota_backup_gigabytes, it is now deprecated. Please use the default quota class for default quota.
2018-03-20 16:41:50.414 2570 WARNING cinder.quota [req-1262bfa8-eb42-41f3-b481-0408d9ee95e3 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] Deprecated: Default quota for resource: volumes_iscsi is set by the default quota flag: quota_volumes_iscsi, it is now deprecated. Please use the default quota class for default quota.
2018-03-20 16:41:50.415 2570 WARNING cinder.quota [req-1262bfa8-eb42-41f3-b481-0408d9ee95e3 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] Deprecated: Default quota for resource: backups is set by the default quota flag: quota_backups, it is now deprecated. Please use the default quota class for default quota.
2018-03-20 16:41:50.416 2570 WARNING cinder.quota [req-1262bfa8-eb42-41f3-b481-0408d9ee95e3 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] Deprecated: Default quota for resource: gigabytes_iscsi is set by the default quota flag: quota_gigabytes_iscsi, it is now deprecated. Please use the default quota class for default quota.
2018-03-20 16:41:50.442 2570 INFO cinder.api.openstack.wsgi [req-1262bfa8-eb42-41f3-b481-0408d9ee95e3 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] http://192.168.3.11:8776/v2/e1ceb67d89314c01add05a0086772df3/limits returned with HTTP 200
2018-03-20 16:41:50.443 2570 INFO eventlet.wsgi.server [req-1262bfa8-eb42-41f3-b481-0408d9ee95e3 ced549e6e1b345be889e11b1c16cf6d9 e1ceb67d89314c01add05a0086772df3 - default default] 192.168.3.11 "GET /v2/e1ceb67d89314c01add05a0086772df3/limits HTTP/1.1" status: 200 len: 570 time: 0.7621832

> On Mar 20, 2018, at 4:51 PM, Remo Mattei <[email protected]> wrote:
>
> I think you need to provide a bit of additional info. Did you look at the
> logs? What version of os are you running? Etc.
>
> Sent from iPhone
>
>> On Mar 20, 2018, at 4:15 PM, Father Vlasie <[email protected]> wrote:
>>
>> Hello everyone,
>>
>> I am in need of help with my Cinder volumes, which have all become
>> unavailable.
>>
>> Is there anyone who would be willing to log in to my system and have a look?
>>
>> My cinder volumes are listed as "NOT available" and my attempts to mount
>> them have been in vain. I have tried: vgchange -a y
>>
>> with the result showing as: 0 logical volume(s) in volume group
>> "cinder-volumes" now active
>>
>> I am a bit desperate because some of the data is critical and, I am ashamed
>> to say, I do not have a backup.
>>
>> Any help or suggestions would be very much appreciated.
>>
>> FV
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : [email protected]
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
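
P.S. In case it helps with a diagnosis, here is the extra LVM state I plan to collect on the host (the volume group name "cinder-volumes" is the one from my first message; device names on your systems will of course vary):

    # list physical volumes, volume groups, and logical volumes as LVM sees them
    pvs -v
    vgs -v
    lvs -a
    # rescan block devices in case the backing PV disappeared or was filtered out
    pvscan
    vgscan -v
    # retry activation verbosely to see why 0 logical volumes come up
    vgchange -a y -v cinder-volumes

I am happy to post the output of any of these if it would help.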
