Hi All,



       We are new to CephFS. We have 5 OSDs, each 3.3 TB in size, and around
12 TB of data is currently stored. Unfortunately osd5 went down, and while the
PGs are being remapped+backfilled the status below shows backfill_toofull,
even though we have around 2 TB of free space. Kindly advise how to resolve
this issue.
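
       We suspect the warning is about individual OSDs hitting their near-full
/ backfill-full ratios rather than the overall free space, but we are not
sure. If it helps, these are the commands we plan to run to check the per-OSD
utilisation (assuming these are the right ones for our version):

     ceph osd df
     ceph health detail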



     cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13
      health HEALTH_WARN
             51 pgs backfill_toofull
             57 pgs degraded
             57 pgs stuck unclean
             57 pgs undersized
             recovery 1139974/17932626 objects degraded (6.357%)
             recovery 1139974/17932626 objects misplaced (6.357%)
             3 near full osd(s)
      monmap e2: 3 mons at {intcfs-mon1=192.168.113.113:6789/0,intcfs-mon2=192.168.113.114:6789/0,intcfs-mon3=192.168.113.72:6789/0}
             election epoch 10, quorum 0,1,2 intcfs-mon3,intcfs-mon1,intcfs-mon2
       fsmap e26: 1/1/1 up {0=intcfs-osd1=up:active}, 1 up:standby
      osdmap e2349: 5 osds: 4 up, 4 in; 57 remapped pgs
             flags sortbitwise
       pgmap v681178: 564 pgs, 3 pools, 5811 GB data, 8756 kobjects
             11001 GB used, 2393 GB / 13394 GB avail
             1139974/17932626 objects degraded (6.357%)
             1139974/17932626 objects misplaced (6.357%)
                  506 active+clean
                   51 active+undersized+degraded+remapped+backfill_toofull
                    6 active+undersized+degraded+remapped
                    1 active+clean+scrubbing





  192.168.113.113,192.168.113.114,192.168.113.72:6789:/  ceph  14T  11T  2.4T  83%  /home/build/cephfsdownloads
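
From what we have read, the backfill_toofull state depends on the OSD backfill
full ratio (default 0.85 on our version, if we understand correctly). Would
temporarily raising it as below be a safe way to let the backfilling finish,
or is there a better approach?

     ceph tell osd.* injectargs '--osd_backfill_full_ratio 0.90'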





Regards

Prabu GJ


