Hi,
in our production cluster (Proxmox 5.4, Ceph 12.2) there has been an issue
since yesterday: after increasing a pool, 5 OSDs do not start. Their
status is "down/in", and ceph health reports: HEALTH_WARN nodown,noout flag(s) set,
5 osds down, 128 osds: 123 up, 128 in.
Last lines of the OSD logfile:
2020-06-26 08:40:26.
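
For context, these are roughly the commands we use to look at the affected
OSDs; a minimal sketch, assuming systemd-managed ceph-osd units, with OSD
ID 23 used only as a placeholder:

  # cluster-wide view: which OSDs are down, plus the detailed health warnings
  ceph osd tree | grep down
  ceph health detail

  # on the affected node: unit state and recent log of one failed OSD
  systemctl status ceph-osd@23
  journalctl -u ceph-osd@23 --since "2020-06-26" | tail -n 50

  # once the OSDs are back up, the flags can be cleared again
  ceph osd unset nodown
  ceph osd unset noout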
Hi all,
we use a Proxmox cluster (v8.2.8) with Ceph (v18.2.4) and EC pools (all
Ceph options at their defaults). One pool is exported via CephFS as the
backend storage for Nextcloud servers.
At the moment we are migrating data from the old S3 storage to the CephFS
pool. There are many files with huge filename lengths.
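
A check like the following can list names that exceed the 255-byte
per-component limit that CephFS normally reports as NAME_MAX, before the
files are copied. This is only a sketch; /mnt/migration is a placeholder
for the staging path on a local filesystem:

  # list every path whose last name component is longer than 255 characters
  # (the limit is enforced in bytes, so multi-byte UTF-8 names can hit it earlier)
  find /mnt/migration -regextype posix-extended -regex '.*/[^/]{256,}'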