Re: [ceph-users] VM Data corruption shortly after Luminous Upgrade

2017-11-08 Thread James Forde

Re: [ceph-users] VM Data corruption shortly after Luminous Upgrade

2017-11-08 Thread Jason Dillaman
Are your QEMU VMs using a different CephX user than client.admin? If so, can you double-check your caps to ensure that the QEMU user can blacklist? See step 6 in the upgrade instructions [1]. The fact that "rbd resize" fixed something hints that your VMs had hard-crashed with the exclusive lock left …
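The caps check suggested above can be sketched roughly as follows. The user name "client.libvirt" and pool name "vms" are assumptions for illustration; substitute the CephX user and pool your QEMU setup actually uses. The "profile rbd" cap profile introduced with Luminous includes the blacklist permission librbd needs to break a stale exclusive lock.

```shell
# Inspect the current caps of the user QEMU connects as
# ("client.libvirt" is a placeholder; use your actual user).
ceph auth get client.libvirt

# Per step 6 of the Luminous upgrade notes, switch the user to the
# rbd profiles, which include the "osd blacklist" permission.
ceph auth caps client.libvirt \
    mon 'profile rbd' \
    osd 'profile rbd pool=vms'

# Confirm the new caps took effect.
ceph auth get client.libvirt
```

If the user still carries pre-Luminous caps such as a bare `allow rwx` on the pool, it cannot blacklist a dead client, and VMs that crash while holding the exclusive lock can appear corrupted until the lock is cleared.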

Re: [ceph-users] VM Data corruption shortly after Luminous Upgrade

2017-11-08 Thread James Forde
Title probably should have read "Ceph Data corruption shortly after Luminous Upgrade". The problem seems to have been sorted out. Still not sure of the cause of the original problem: upgrade latency? mgr errors? After I resolved the boot problem I attempted to reproduce the error, but was unsuccessful …

[ceph-users] VM Data corruption shortly after Luminous Upgrade

2017-11-06 Thread James Forde
Weird but very bad problem with my test cluster 2-3 weeks after upgrading to Luminous. All 7 running VMs are corrupted and unbootable: 6 Windows and 1 CentOS 7. The Windows error is "unmountable boot volume"; CentOS 7 will only boot to emergency mode. 3 VMs that were off during the event work as expected …
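A stale exclusive lock left by a hard-crashed VM can be diagnosed with the stock rbd CLI. This is a hedged sketch: "vms/win2016-c" is a hypothetical pool/image name, and the lock id and locker shown by the first command must be pasted into the removal step.

```shell
# List advisory/exclusive lock holders on the image
# (pool "vms", image "win2016-c" are placeholders).
rbd lock ls vms/win2016-c

# Show live watchers; a lingering lock with no watchers
# indicates the client died while holding the lock.
rbd status vms/win2016-c

# Break the stale lock, using the lock id and locker printed by
# "rbd lock ls". This requires the blacklist cap on the client.
rbd lock rm vms/win2016-c <lock-id> <locker>
```

This would also explain why an otherwise no-op operation like "rbd resize" appeared to fix things: an admin-capable client reacquiring the image can blacklist the dead lock holder as a side effect.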