I've also seen something similar once with Luminous: broken OSDs were reporting nonsense stats that overflowed some variables and showed up as 10000000% full. In my case it was BlueStore OSDs running on VMs that were too small.
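[For context: 2^64 bytes is exactly 16 EiB, so a "used" figure of 16 EiB is the classic signature of a negative or garbage statistic ending up in an unsigned 64-bit counter. The sketch below only illustrates that wraparound; the "-1 TiB" input is made up and this is not Ceph's actual accounting code.]

// Minimal illustration of how a bogus negative byte counter can be
// displayed as roughly 16 EiB once it is treated as an unsigned 64-bit value.
#include <cstdint>
#include <iostream>

int main() {
    // Hypothetical broken statistic from an OSD: "-1 TiB used".
    int64_t reported_used = -1099511627776;

    // Stored/interpreted as unsigned 64-bit, it wraps around near 2^64.
    uint64_t displayed = static_cast<uint64_t>(reported_used);

    std::cout << "wrapped value: " << displayed << " bytes\n";
    std::cout << "as EiB:        "
              << static_cast<double>(displayed) / (1ULL << 60) << " EiB\n";
    // Prints just under 16 EiB -- the same telltale number as in the
    // "usage: 16 EiB used" output quoted below.
    return 0;
}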
Paul

2018-06-20 17:41 GMT+02:00 Raju Rangoju <ra...@chelsio.com>:
> Hi,
>
> Recently I upgraded my Ceph cluster from version 13.0.1 to version
> 14.0.0 - nautilus (dev), and after this I noticed some weird data
> usage numbers on the cluster.
>
> Here are the issues I'm seeing:
>
> 1. The data usage reported is much more than what is available:
>
>    usage: 16 EiB used, 164 TiB / 158 TiB avail
>
>    Before this upgrade, it used to be reported correctly:
>
>    usage: 1.10T used, 157T / 158T avail
>
> 2. It reports that all the OSDs/pools are full.
>
> Can someone please shed some light? Any help is greatly appreciated.
>
> [root@hadoop1 my-ceph]# ceph --version
> ceph version 14.0.0-480-g6c1e8ee (6c1e8ee14f9b25dc96684dbc1f8c8255c47f0bb9) nautilus (dev)
>
> [root@hadoop1 my-ceph]# ceph -s
>   cluster:
>     id:     ee4660fd-167b-42e6-b27b-126526dab04d
>     health: HEALTH_ERR
>             87 full osd(s)
>             11 pool(s) full
>
>   services:
>     mon: 3 daemons, quorum hadoop1,hadoop4,hadoop6
>     mgr: hadoop6(active), standbys: hadoop1, hadoop4
>     mds: cephfs-1/1/1 up {0=hadoop3=up:creating}, 2 up:standby
>     osd: 88 osds: 87 up, 87 in
>
>   data:
>     pools:   11 pools, 32588 pgs
>     objects: 0 objects, 0 B
>     usage:   16 EiB used, 164 TiB / 158 TiB avail
>     pgs:     32588 active+clean
>
> Thanks in advance
> -Raj

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com