Hi Gregory,


In our case we have 6TB of data and replica 2, so around 12TB of raw space is occupied. I still have about 4TB remaining, yet I get this error:

 51 active+undersized+degraded+remapped+backfill_toofull
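(As a hedged aside, not from the thread itself: backfill_toofull is raised per OSD, when the OSD a PG wants to backfill onto is over the backfill-full threshold, so a single nearly-full OSD can block backfill even while the cluster still has free space in aggregate. A minimal Python sketch with made-up OSD numbers, assuming the pre-Luminous osd_backfill_full_ratio option and its commonly cited 0.85 default; check your own configuration.)

```python
# Sketch only: why backfill_toofull can appear with aggregate free space left.
# All numbers below are hypothetical, not taken from this cluster.

BACKFILL_FULL_RATIO = 0.85  # assumed threshold (osd_backfill_full_ratio)

# Hypothetical per-OSD usage in TB: osd_id -> (used, size)
osds = {
    0: (1.20, 1.60),
    1: (1.52, 1.60),  # ~95% full: backfill targeting this OSD is refused
    2: (0.90, 1.60),
    3: (1.10, 1.60),
}

total_used = sum(u for u, _ in osds.values())
total_size = sum(s for _, s in osds.values())
print(f"cluster: {total_used:.2f}/{total_size:.2f} TB used "
      f"({total_size - total_used:.2f} TB free in aggregate)")

for osd_id, (used, size) in sorted(osds.items()):
    ratio = used / size
    state = "backfill_toofull" if ratio >= BACKFILL_FULL_RATIO else "ok"
    print(f"osd.{osd_id}: {ratio:.0%} used -> {state}")
```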



Regards

Prabu GJ





---- On Mon, 29 Aug 2016 23:44:12 +0530 Gregory Farnum <gfar...@redhat.com> wrote ----




On Mon, Aug 29, 2016 at 12:53 AM, Christian Balzer <ch...@gol.com> wrote:
> On Mon, 29 Aug 2016 12:51:55 +0530 gjprabu wrote:
>
>> Hi Christian,
>>
>> Sorry for the subject, and thanks for your reply.
>>
>> > That's incredibly small in terms of OSD numbers, how many hosts?
>> > What replication size?
>>
>> Total hosts: 5.
>>
>> Replicated size: 2
>>
> At this replication size you need to act and replace/add OSDs NOW.
> The next OSD failure will result in data loss.
>
> So your RAW space is about 16TB, leaving you with 8TB of usable space.
>
> Which doesn't mesh with your "df", showing the ceph FS with 11TB used...

When you run df against a CephFS mount, it generally reports the same data as you get out of RADOS -- so if you have replica 2 and 4TB of data, it will report as 8TB used (since, after all, you have used 8TB!). There are exceptions in a few cases; you can have it based off of your quotas for subtree mounts for one.

-Greg
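(A minimal worked example of the arithmetic Greg describes; the function name and numbers are illustrative, not from the thread: df on a CephFS mount reflects raw RADOS usage, i.e. logical data multiplied by the pool's replica count.)

```python
# Sketch of the df arithmetic for replicated pools: the "used" figure a CephFS
# mount reports is raw RADOS usage, i.e. logical data x replica count.
# Function name and numbers are illustrative only.

def cephfs_df_used_tb(logical_data_tb: float, replica_size: int) -> float:
    """Raw space a replicated pool consumes for the given logical data."""
    return logical_data_tb * replica_size

print(cephfs_df_used_tb(4, 2))  # 8.0  -> Greg's example: 4TB of data shows as 8TB used
print(cephfs_df_used_tb(6, 2))  # 12.0 -> ~6TB of data accounts for the ~12TB reported here
```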

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
