On 15 Oct 2014, at 11:48, Roman <[email protected]> wrote:

> Pascal,
> 
> Here is my latest installation:
> 
>     cluster 204986f6-f43c-4199-b093-8f5c7bc641bb
>      health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 
> 20/40 objects degraded (50.000%)
>      monmap e1: 2 mons at 
> {ceph02=192.168.33.142:6789/0,ceph03=192.168.33.143:6789/0}, election epoch 
> 4, quorum 0,1 ceph02,ceph03
>      mdsmap e4: 1/1/1 up {0=ceph02=up:active}
>      osdmap e8: 2 osds: 2 up, 2 in
>       pgmap v14: 192 pgs, 3 pools, 1884 bytes data, 20 objects
>             68796 kB used, 6054 MB / 6121 MB avail
>             20/40 objects degraded (50.000%)
>                  192 active+degraded
> 
> 
> host ceph01 - admin
> host ceph02 - mon.ceph02 + osd.1 (sdb, 8G) + mds
> host ceph03 - mon.ceph03 + osd.0 (sdb, 8G)
> 
> $ ceph osd tree
> # id    weight  type name       up/down reweight
> -1      0       root default
> -2      0               host ceph03
> 0       0                       osd.0   up      1
> -3      0               host ceph02
> 1       0                       osd.1   up      1
> 

I'm not sure, but I think something went wrong during the installation: the 
weights are 0 at every level of your CRUSH map.
See http://docs.ceph.com/docs/master/rados/operations/crush-map/
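
In case it helps, you can dump and decompile the current CRUSH map to check the 
buckets, weights and rulesets (the file paths below are just examples):

# ceph osd getcrushmap -o /tmp/crushmap
# crushtool -d /tmp/crushmap -o /tmp/crushmap.txt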

Check the ruleset with id 0 (all of your pools use this ruleset).
If you have something like this:

rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

and if I understand the mechanism correctly, the "step chooseleaf firstn 0 type host" 
line means each replica must be placed on a different host, so Ceph needs two OSDs on 
two different hosts to place both copies. Maybe a weight of 0 is the wrong value?
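
If the weights really are 0, one thing you could try (the value 1.0 is just an 
example, any positive weight should do) is to give both OSDs a non-zero CRUSH 
weight and watch whether the PGs become active+clean:

# ceph osd crush reweight osd.0 1.0
# ceph osd crush reweight osd.1 1.0
# ceph -w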

After deploying a similar configuration, I have:
# id    weight  type name       up/down reweight
-1      2       root default
-2      1               host econome-7
1       1                       osd.1   up      1       
-3      1               host econome-18
0       1                       osd.0   up      1

Pascal


> 
> $ ceph osd dump
> epoch 8
> fsid 204986f6-f43c-4199-b093-8f5c7bc641bb
> created 2014-10-15 13:39:05.986977
> modified 2014-10-15 13:40:45.644870
> flags 
> pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool 
> crash_replay_interval 45 stripe_width 0
> pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
> pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
> rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
> max_osd 2
> osd.0 up   in  weight 1 up_from 4 up_thru 4 down_at 0 last_clean_interval 
> [0,0) 192.168.33.143:6800/2284 192.168.33.143:6801/2284 
> 192.168.33.143:6802/2284 192.168.33.143:6803/2284 exists,up 
> dccd6b99-1885-4c62-864b-107bd9ba0d84
> osd.1 up   in  weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval 
> [0,0) 192.168.33.142:6800/2399 192.168.33.142:6801/2399 
> 192.168.33.142:6802/2399 192.168.33.142:6803/2399 exists,up 
> 4d4adf4b-ae8e-4e26-8667-c952c7fc4e45
> 
> Thanks,
> Roman
> 
>> Hello,
>> 
>>>     osdmap e10: 4 osds: 2 up, 2 in
>> 
>> 
>> What about the following commands:
>> # ceph osd tree
>> # ceph osd dump
>> 
>> You have 2 OSDs on 2 hosts, but 4 OSDs seem to be defined in your crush map.
>> 
>> Regards,
>> 
>> Pascal
>> 
>> On 15 Oct 2014, at 11:11, Roman <[email protected]> wrote:
>> 
>>> Hi ALL,
>>> 
>>> I've created 2 mons and 2 OSDs on CentOS 6.5 (x86_64).
>>> 
>>> I've tried 4 times (each from a clean CentOS installation) but I always get 
>>> HEALTH_WARN.
>>> 
>>> Never HEALTH_OK, always HEALTH_WARN! :(
>>> 
>>> # ceph -s
>>>    cluster d073ed20-4c0e-445e-bfb0-7b7658954874
>>>     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
>>>     monmap e1: 2 mons at 
>>> {ceph02=192.168.0.142:6789/0,ceph03=192.168.0.143:6789/0}, election epoch 
>>> 4, quorum 0,1 ceph02,ceph03
>>>     osdmap e10: 4 osds: 2 up, 2 in
>>>      pgmap v15: 192 pgs, 3 pools, 0 bytes data, 0 objects
>>>            68908 kB used, 6054 MB / 6121 MB avail
>>>                 192 active+degraded
>>> 
>>> What am I doing wrong???
>>> 
>>> -----------
>>> 
>>> host:  192.168.0.141 - admin
>>> host:  192.168.0.142 - mon.ceph02 + osd.0 (/dev/sdb, 8G)
>>> host:  192.168.0.143 - mon.ceph03 + osd.1 (/dev/sdb, 8G)
>>> 
>>> ceph-deploy version 1.5.18
>>> 
>>> [global]
>>> osd pool default size = 2
>>> -----------
>>> 
>>> Thanks,
>>> Roman.
>> 
> 

--
Pascal Morillon
University of Rennes 1
IRISA, Rennes, France
SED
Offices : E206 (Grid5000), D050 (SED)
Phone : +33 2 99 84 22 10
[email protected]
Twitter @pmorillon
xmpp: [email protected]
http://www.grid5000.fr


