http://ceph.com/docs/master/rados/operations/placement-groups/
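
In short, the warning means each OSD carries too few placement groups: 192 PGs
spread over 55 OSDs is only about 3 per OSD, and Ceph wants at least 20. The
page above suggests roughly (number of OSDs * 100) / replica size total PGs,
rounded up to the nearest power of two. A rough sketch of the commands,
assuming the data will live in the default "rbd" pool and a replica size of 2;
adjust the pool name, size and target to your setup:

    # check the current values for the pool
    ceph osd pool get rbd size
    ceph osd pool get rbd pg_num

    # (55 OSDs * 100) / size 2 = 2750, rounded up to a power of two -> 4096
    # raise pg_num first, then pgp_num to the same value (pg_num can only be increased)
    ceph osd pool set rbd pg_num 4096
    ceph osd pool set rbd pgp_num 4096

Since your cluster is still empty (0 bytes, 0 objects), the resulting data
movement is negligible; on a loaded cluster you would want to raise pg_num in
smaller steps.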


2013/12/27 German Anders <gand...@despegar.com>

>  Hi all,
>
>    I have the following warning (HEALTH_WARN) in my cluster:
>
> ceph@ceph-node04:~$ sudo ceph status
>     cluster 50ae3778-dfe3-4492-9628-54a8918ede92
>      health HEALTH_WARN too few pgs per osd (3 < min 20)
>      monmap e1: 1 mons at {ceph-node01=10.1.1.151:6789/0}, election epoch
> 2, quorum 0 ceph-node01
>      osdmap e259: 55 osds: 55 up, 55 in
>       pgmap v703: 192 pgs, 3 pools, 0 bytes data, 0 objects
>             2579 MB used, 7328 GB / 7331 GB avail
>                  192 active+clean
>
> ceph@ceph-node04:~$
>
>
> It's a new cluster setup, the OSD tree is the following:
>
> ceph@ceph-node04:~$ sudo ceph osd tree
> # id    weight    type name    up/down    reweight
> -1    7.27    root default
> -2    1.15        host ceph-node01
> 12    0.06999            osd.12    up    1
> 13    0.06999            osd.13    up    1
> 14    0.06999            osd.14    up    1
> 15    0.06999            osd.15    up    1
> 16    0.06999            osd.16    up    1
> 17    0.06999            osd.17    up    1
> 18    0.06999            osd.18    up    1
> 19    0.06999            osd.19    up    1
> 20    0.06999            osd.20    up    1
> 21    0.45            osd.21    up    1
> 22    0.06999            osd.22    up    1
> -3    1.53        host ceph-node02
> 23    0.06999            osd.23    up    1
> 24    0.06999            osd.24    up    1
> 25    0.06999            osd.25    up    1
> 26    0.06999            osd.26    up    1
> 27    0.06999            osd.27    up    1
> 28    0.06999            osd.28    up    1
> 29    0.06999            osd.29    up    1
> 30    0.06999            osd.30    up    1
> 31    0.06999            osd.31    up    1
> 32    0.45            osd.32    up    1
> 33    0.45            osd.33    up    1
> -4    1.53        host ceph-node03
> 34    0.06999            osd.34    up    1
> 35    0.06999            osd.35    up    1
> 36    0.06999            osd.36    up    1
> 37    0.06999            osd.37    up    1
> 38    0.06999            osd.38    up    1
> 39    0.06999            osd.39    up    1
> 40    0.06999            osd.40    up    1
> 41    0.06999            osd.41    up    1
> 42    0.06999            osd.42    up    1
> 43    0.45            osd.43    up    1
> 44    0.45            osd.44    up    1
> -5    1.53        host ceph-node04
> 0    0.06999            osd.0    up    1
> 1    0.06999            osd.1    up    1
> 2    0.06999            osd.2    up    1
> 3    0.06999            osd.3    up    1
> 4    0.06999            osd.4    up    1
> 5    0.06999            osd.5    up    1
> 6    0.06999            osd.6    up    1
> 7    0.06999            osd.7    up    1
> 8    0.06999            osd.8    up    1
> 9    0.45            osd.9    up    1
> 10    0.45            osd.10    up    1
> -6    1.53        host ceph-node05
> 11    0.06999            osd.11    up    1
> 45    0.06999            osd.45    up    1
> 46    0.06999            osd.46    up    1
> 47    0.06999            osd.47    up    1
> 48    0.06999            osd.48    up    1
> 49    0.06999            osd.49    up    1
> 50    0.06999            osd.50    up    1
> 51    0.06999            osd.51    up    1
> 52    0.06999            osd.52    up    1
> 53    0.45            osd.53    up    1
> 54    0.45            osd.54    up    1
>
> ceph@ceph-node04:~$
>
> Could someone give me a hand resolving this situation?
>
>
> German Anders


-- 
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
