Hi all,
for those reading along: we had to turn off all OSDs backing our cephfs-data
pool during the intervention; luckily, everything came back fine.
However, we managed to keep the MDSs, the OSDs backing the cephfs-metadata
pool, and the MONs online. We restarted those sequentially afterwards.
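For completeness, the restart itself was nothing special. A rough sketch of
the kind of sequence involved (a sketch only, assuming systemd-managed OSDs
and the usual noout flag; the OSD ids are placeholders, not an exact
transcript):

#ceph osd set noout
#systemctl stop ceph-osd@<id>      (on every host backing cephfs-data)
(physical intervention happens here)
#systemctl start ceph-osd@<id>     (one host at a time, waiting for PGs to go active+clean)
#ceph osd unset noout

With noout set, the cluster does not start rebalancing while the data-pool
OSDs are down, which keeps the downtime window predictable.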
Hi all,
it seems the issue described by Ansgar was reported and closed here as being
fixed for newly created pools in post-Luminous releases:
https://tracker.ceph.com/issues/41336
However, it is unclear to me:
- How to find out if an EC cephfs you have created in Luminous is actually affected (one way to inspect the relevant parameters is sketched below)
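In case it helps while that question is open: assuming the crash is the
stripe-geometry assert in ECUtil.h from Ansgar's log, a reasonable first
check would be to compare the pool's stripe_width against k * stripe_unit
from its erasure-code profile. A sketch, with <pool> and <profile> as
placeholders for your own names:

#ceph osd pool ls detail | grep <pool>            (the pool line includes stripe_width)
#ceph osd pool get <pool> erasure_code_profile
#ceph osd erasure-code-profile get <profile>      (shows k, m and stripe_unit)

If stripe_width is not an exact multiple of k, that would at least be
consistent with the assert in the log.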
We got our OSDs back.
Since we removed the EC pool (cephfs.data), we had to figure out how to
remove the PG from the offline OSD, and here is how we did it.
Remove cephfs, remove the cache layer, remove the pools:
#ceph mds fail 0
#ceph fs rm cephfs --yes-i-really-mean-it
#ceph osd tier remove-overlay cephf
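To spell out the rest of the cleanup and the offline PG removal (the pool
names, OSD id and pgid below are placeholders, not an exact transcript of
what we ran):

#ceph osd tier remove <data-pool> <cache-pool>
#ceph osd pool delete <cache-pool> <cache-pool> --yes-i-really-really-mean-it
#ceph osd pool delete <data-pool> <data-pool> --yes-i-really-really-mean-it
(pool deletion also needs mon_allow_pool_delete=true)

With the pools gone, the leftover PG copies on the OSDs that still refuse to
start can be removed offline with ceph-objectstore-tool, with the OSD
stopped:

#ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --op list-pgs
#ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --pgid <pgid> --op remove --force

Removing the stale PG from the offline OSD is what let us start the affected
OSDs again; the exact invocations above may differ slightly from what we ran,
so double-check paths and ids on your side.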
another update,
we now took the more destructive route and removed the cephfs pools
(luckily we had only test data in the filesystem).
Our hope was that during the startup process the OSD would delete the
no-longer-needed PG, but this is NOT the case.
So we still have the same issue; the only difference
Hi,
as a follow-up:
* a full log of one OSD failing to start https://pastebin.com/T8UQ2rZ6
* our ec-pool creation in the first place https://pastebin.com/20cC06Jn
* ceph osd dump and ceph osd erasure-code-profile get cephfs
https://pastebin.com/TRLPaWcH
as we try to dig more into it, it looks like
hi folks,
we had to move one of our clusters, so we had to boot all servers; now
we are seeing an error on all OSDs with the EC pool.
Are we missing some options? Will an upgrade to 13.2.6 help?
Thanks,
Ansgar
2019-08-06 12:10:16.265 7fb337b83200 -1
/build/ceph-13.2.4/src/osd/ECUtil.h: In function
'ECUti