problem, see "Set the right GPT
type GUIDs on OSD and journal partitions for udev automount rules" at
https://www.mirantis.com/openstack-portal/external-tutorials/ceph-mirantis-openstack-full-transcript/
It would be great to fix this problem for good.
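For reference, the fix described in that tutorial boils down to tagging the partitions with the well-known Ceph type GUIDs. A rough sketch (the device /dev/sdb and the partition numbers are placeholders for your own layout):

$ sudo sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb   # OSD data partition
$ sudo sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb   # journal partition
$ sudo partprobe /dev/sdb   # re-read the partition table so the udev rules fire

With the type GUIDs in place, the ceph udev rules should mount and activate the OSDs automatically at boot.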
Thanks,
- Fredy
From: Somnath Roy
To: Fredy Neeser, "ceph-users@lists.ceph.com"
Date: 07/07/2015 07:49 PM
Subject: RE: [ceph-users] Ceph OSDs are down and cannot be started
Run:
'ceph-osd -i 0 -f' in a console and see what it reports.
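Raising the debug levels in that foreground run usually reveals why the daemon dies. A possible invocation (the flag values are my suggestion, not from the original mail):

$ ceph-osd -i 0 -f --debug-osd 20 --debug-ms 1   # run OSD 0 in the foreground with verbose OSD and messenger logging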
Hi,
I had a working Ceph Hammer test setup with 3 OSDs and 1 MON (running on
VMs), and RBD was working fine.
The setup had been untouched for two weeks (with no I/O activity either), and when I
looked again, the cluster was in a bad state:
On the MON node (sto-vm20):
$ ceph health
HEALTH_WARN 72 pgs stale
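To narrow down which PGs are stale and which OSDs they map to, the standard follow-up commands are (assumed here, not part of the original output):

$ ceph health detail        # lists the affected PGs and OSDs
$ ceph osd tree             # shows which OSDs are up/down
$ ceph pg dump_stuck stale  # dumps just the stale PGs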