Hi,

   I'm facing a critical issue with my Ceph cluster: it can no longer read or
write data properly, and it is not recovering on its own. What steps should I
take to resolve this?

  [root@ceph-node1 ~]# ceph -s
  cluster:
    id:     76956086-25f5-445d-a49e-b7824393c17b
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
            102131233/124848552 objects misplaced (81.804%)
            Reduced data availability: 40 pgs inactive
            Degraded data redundancy: 6402821/124848552 objects degraded (5.128%), 11 pgs degraded, 31 pgs undersized

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node2(active), standbys: ceph-node3, ceph-node1
    osd: 6 osds: 6 up, 6 in; 169 remapped pgs

  data:
    pools:   7 pools, 216 pgs
    objects: 41.62 M objects, 2.6 TiB
    usage:   18 TiB used, 12 TiB / 30 TiB avail
    pgs:     12.500% pgs unknown
             6.019% pgs not active
             6402821/124848552 objects degraded (5.128%)
             102131233/124848552 objects misplaced (81.804%)
             143 active+clean+remapped
             27  unknown
             15  active+undersized+remapped
             13  active+clean
             9   undersized+degraded+peered
             4   undersized+peered
             2   active+undersized+degraded
             1   active+undersized
             1   active+remapped+backfilling
             1   active+clean+remapped+scrubbing+deep

  io:
    recovery: 1.3 MiB/s, 20 objects/s,
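
In case it helps, I can also gather more detail on the inactive/unknown PGs.
I assume the usual commands for that are something along these lines (happy to
post the output if it would be useful):

  # overall health detail, including which PGs are inactive/undersized
  ceph health detail

  # list PGs stuck inactive or unclean
  ceph pg dump_stuck inactive
  ceph pg dump_stuck unclean

  # query a single problem PG directly (with <pgid> replaced by a real id)
  ceph pg <pgid> query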




[root@ceph-node1 ~]# ceph osd df tree
ID CLASS WEIGHT   REWEIGHT SIZE    USE     DATA    OMAP    META    AVAIL   %USE  VAR  PGS TYPE NAME
-1       30.00000        -  30 TiB  18 TiB  17 TiB 125 GiB 302 GiB  12 TiB 58.88 1.00   - root default
-3       10.00000        -  10 TiB 5.9 TiB 5.8 TiB  38 GiB 101 GiB 4.1 TiB 59.11 1.00   -     host ceph-node1
 0   hdd  5.00000  0.21257 5.0 TiB 2.3 TiB 2.2 TiB  25 GiB  41 GiB 2.7 TiB 46.21 0.78  61         osd.0
 3   hdd  5.00000  0.13638 5.0 TiB 3.6 TiB 3.5 TiB  13 GiB  60 GiB 1.4 TiB 72.02 1.22  84         osd.3
-5       10.00000        -  10 TiB 6.2 TiB 6.1 TiB  42 GiB 105 GiB 3.8 TiB 62.40 1.06   -     host ceph-node2
 1   hdd  5.00000  0.13644 5.0 TiB 3.6 TiB 3.5 TiB  11 GiB  61 GiB 1.4 TiB 71.99 1.22  95         osd.1
 4   hdd  5.00000  0.16666 5.0 TiB 2.6 TiB 2.6 TiB  31 GiB  44 GiB 2.4 TiB 52.81 0.90 104         osd.4
-7       10.00000        -  10 TiB 5.5 TiB 5.4 TiB  45 GiB  96 GiB 4.5 TiB 55.11 0.94   -     host ceph-node3
 2   hdd  5.00000  0.16666 5.0 TiB 3.5 TiB 3.4 TiB  22 GiB  62 GiB 1.5 TiB 70.19 1.19  79         osd.2
 5   hdd  5.00000  0.21664 5.0 TiB 2.0 TiB 1.9 TiB  23 GiB  34 GiB 3.0 TiB 40.04 0.68 100         osd.5
                     TOTAL  30 TiB  18 TiB  17 TiB 125 GiB 302 GiB  12 TiB 58.88
MIN/MAX VAR: 0.68/1.22  STDDEV: 13.38
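
One thing I notice in the output above is that every OSD has a REWEIGHT far
below 1.0 (between 0.14 and 0.22), and I suspect that is what is producing such
a large misplaced count. If setting them back to 1.0 is the right fix, I assume
it would be something like the following, repeated for each OSD id (0 through 5):

  # restore the override reweight of osd.0 (id 0) to the default of 1.0
  ceph osd reweight 0 1.0

But I would like to confirm the right approach before touching anything.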



Thanks~


