I would check whether the images still have an exclusive lock held by a
force-killed VM. librbd will normally clean this up automatically, but it
cannot do so if the client lacks the permissions needed to blacklist the
dead client from the Ceph cluster. Verify that your OpenStack Ceph user
caps are correct [1][2].

[1] http://docs.ceph.com/docs/master/releases/luminous/#upgrade-from-jewel-or-kraken
[2] http://docs.ceph.com/docs/luminous/rbd/rbd-openstack/#setup-ceph-client-authentication
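
For example, you could check for a stale lock and verify the user's caps
along these lines (the "volumes"/"vms"/"images" pools and "client.cinder"
user follow the examples in [2]; adjust the names for your deployment):

  # list any exclusive lock still held on the image
  rbd lock ls volumes/volume-<uuid>

  # show the caps currently assigned to the OpenStack user
  ceph auth get client.cinder

  # the luminous docs recommend the rbd profiles, whose mon cap includes
  # the blacklist permission librbd needs to clean up after dead clients
  ceph auth caps client.cinder mon 'profile rbd' \
      osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'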
On Tue, Dec 4, 2018 at 8:56 AM Simon Ironside <sirons...@caffetine.org> wrote:
>
> On 04/12/2018 09:37, linghucongsong wrote:
>
> But this is just the case where all the hosts suddenly lose power!
>
>
> I'm surprised you're seeing I/O errors inside the VMs once they're restarted.
> Is the cluster healthy? What's the output of ceph status?
>
> Simon



-- 
Jason
