Hi Raju,

This is a bug in BlueStore's new bitmap allocator.

This PR will most probably fix that:

https://github.com/ceph/ceph/pull/22610
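
A side note on the numbers, which is my reading of the symptom rather than the exact mechanics of the bug: 16 EiB is exactly 2^64 bytes, so a "used" figure of 16 EiB is what a byte counter looks like when it dips slightly below zero and is read back as an unsigned 64-bit value. A quick Python sketch (the ~1.1 TiB offset is just an illustrative assumption):

  # Hypothetical illustration: a usage statistic underflows by ~1.1 TiB
  # and is then interpreted as an unsigned 64-bit byte count.
  used = 0 - (1100 * 2**30)             # counter goes ~1.1 TiB below zero
  as_uint64 = used % 2**64              # what an unsigned 64-bit field would hold
  print(f"{as_uint64 / 2**60:.1f} EiB") # -> 16.0 EiB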


Also, you may try switching the BlueStore and BlueFS allocators (the bluestore_allocator and bluefs_allocator parameters, respectively) to "stupid" and restarting the OSDs.
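
For reference, a minimal ceph.conf override might look like this (placed on each OSD host; adjust to your deployment, then restart the OSDs, e.g. with systemctl restart ceph-osd@<id> on systemd-based setups):

  [osd]
  bluestore_allocator = stupid
  bluefs_allocator = stupid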

This should help.


Thanks,

Igor


On 6/20/2018 6:41 PM, Raju Rangoju wrote:

Hi,

I recently upgraded my Ceph cluster from version 13.0.1 to version 14.0.0 - Nautilus (dev). After this, I noticed some weird data usage numbers on the cluster.

Here are the issues I’m seeing…

 1. The reported data usage is far more than the cluster's total capacity

usage:   16 EiB used, 164 TiB / 158 TiB avail

Before this upgrade, usage was reported correctly:

usage:   1.10T used, 157T / 158T avail

 2. It reports that all the OSDs/pools are full

Can someone please shed some light? Any help is greatly appreciated.

[root@hadoop1 my-ceph]# ceph --version

ceph version 14.0.0-480-g6c1e8ee (6c1e8ee14f9b25dc96684dbc1f8c8255c47f0bb9) nautilus (dev)

[root@hadoop1 my-ceph]# ceph -s

  cluster:

    id: ee4660fd-167b-42e6-b27b-126526dab04d

    health: HEALTH_ERR

            87 full osd(s)

            11 pool(s) full

  services:

    mon: 3 daemons, quorum hadoop1,hadoop4,hadoop6

    mgr: hadoop6(active), standbys: hadoop1, hadoop4

    mds: cephfs-1/1/1 up {0=hadoop3=up:creating}, 2 up:standby

    osd: 88 osds: 87 up, 87 in

  data:

    pools:   11 pools, 32588 pgs

    objects: 0  objects, 0 B

    usage:   16 EiB used, 164 TiB / 158 TiB avail

    pgs:     32588 active+clean

Thanks in advance

-Raj



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
