OK, I see now. That looks fine.
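
For reference, this is roughly what I'd expect in ceph.conf when the journal
is a whole block device (just a sketch; the device path below is an example,
not taken from your config):

    [osd]
    osd journal = /dev/sdb2       # raw partition used as the journal
    osd journal size = 0          # 0 = use the entire block device

With a plain file as the journal, a zero size would be a problem, which is
why I asked about it.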

According to the log, many OSDs try to boot repeatedly. I think the
problem may be on the monitor side. Could you check the monitor node?
The ceph-mon.log you provided is blank.
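
If the monitor log stays empty, raising the monitor's debug level and then
watching the log while the OSDs try to boot may help. A rough sketch,
assuming the default admin socket and log paths on the mon node (adjust the
mon id and paths to your setup):

    # bump verbosity on mon.0 through its admin socket
    ceph --admin-daemon /var/run/ceph/ceph-mon.0.asok config set debug_mon 10
    ceph --admin-daemon /var/run/ceph/ceph-mon.0.asok config set debug_ms 1

    # then watch for the OSDs' boot attempts
    tail -f /var/log/ceph/ceph-mon.0.log

Then please attach the new ceph-mon.log and ceph.log again.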

On Wed, Apr 30, 2014 at 3:59 PM, Cao, Buddy <buddy....@intel.com> wrote:
> Yes, I set "osd journal size = 0" on purpose; I'd like to use all of the
> space of the journal device. I think I got the idea from the Ceph website...
> Yes, I do run "mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin"
> to create the Ceph cluster, and it succeeds.
>
> Do you think "osd journal size=0" would cause any problems?
>
>
> Wei Cao (Buddy)
>
> -----Original Message-----
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Wednesday, April 30, 2014 3:48 PM
> To: Cao, Buddy
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] mkcephfs questions
>
> I found "osd journal size = 0" in your ceph.conf.
> Did you really run mkcephfs with this? I think it will fail.
>
> On Wed, Apr 30, 2014 at 2:42 PM, Cao, Buddy <buddy....@intel.com> wrote:
>> Here you go... I did not see any logs related to the stuck unclean pgs...
>>
>>
>>
>> Wei Cao (Buddy)
>>
>> -----Original Message-----
>> From: Haomai Wang [mailto:haomaiw...@gmail.com]
>> Sent: Wednesday, April 30, 2014 2:12 PM
>> To: Cao, Buddy
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] mkcephfs questions
>>
>> Hmm, there must be another problem at play. Maybe more logs could explain it.
>>
>> ceph.log
>> ceph-mon.log
>>
>> On Wed, Apr 30, 2014 at 12:06 PM, Cao, Buddy <buddy....@intel.com> wrote:
>>> Thanks for your reply, Haomai. What I don't understand is why the number
>>> of stuck unclean pgs stays the same after 12 hours. Is that common
>>> behavior or not?
>>>
>>>
>>> Wei Cao (Buddy)
>>>
>>> -----Original Message-----
>>> From: Haomai Wang [mailto:haomaiw...@gmail.com]
>>> Sent: Wednesday, April 30, 2014 11:36 AM
>>> To: Cao, Buddy
>>> Cc: ceph-users@lists.ceph.com
>>> Subject: Re: [ceph-users] mkcephfs questions
>>>
>>> The result of "ceph -s" should tell you the reason: only 21 OSDs are up,
>>> but all 24 OSDs are needed.
>>>
>>> On Wed, Apr 30, 2014 at 11:21 AM, Cao, Buddy <buddy....@intel.com> wrote:
>>>> Hi,
>>>>
>>>>
>>>>
>>>> I set up a Ceph cluster with the mkcephfs command. When I enter "ceph -s",
>>>> it always reports 4950 stuck unclean pgs. I ran the same "ceph -s" again
>>>> after 12 hours and it still returned the same number of unclean pgs;
>>>> nothing changed. Does mkcephfs always have this problem, or did I do
>>>> something wrong? I attached the result of "ceph -s", "ceph osd tree" and
>>>> the ceph.conf I have; please kindly help.
>>>>
>>>> [root@ceph]# ceph -s
>>>>     cluster 99fd4ff8-0fb8-47b9-8179-fefbba1c2503
>>>>      health HEALTH_WARN 4950 pgs degraded; 4950 pgs stuck unclean;
>>>>             recovery 21/42 objects degraded (50.000%); 3/24 in osds are
>>>>             down; clock skew detected on mon.1, mon.2
>>>>      monmap e1: 3 mons at
>>>>             {0=192.168.0.2:6789/0,1=192.168.0.3:6789/0,2=192.168.0.4:6789/0},
>>>>             election epoch 6, quorum 0,1,2 0,1,2
>>>>      mdsmap e4: 1/1/1 up {0=0=up:active}
>>>>      osdmap e6019: 24 osds: 21 up, 24 in
>>>>       pgmap v16445: 4950 pgs, 6 pools, 9470 bytes data, 21 objects
>>>>             4900 MB used, 93118 MB / 98019 MB avail
>>>>             21/42 objects degraded (50.000%)
>>>>                 4950 active+degraded
>>>>
>>>> [root@ceph]# ceph osd tree   // partial output
>>>> # id    weight  type name       up/down reweight
>>>> -36     25      root vsm
>>>> -31     3.2             storage_group ssd
>>>> -16     3                       zone zone_a_ssd
>>>> -1      1                               host vsm2_ssd_zone_a
>>>> 2       1                                       osd.2   up      1
>>>> -6      1                               host vsm3_ssd_zone_a
>>>> 10      1                                       osd.10  up      1
>>>> -11     1                               host vsm4_ssd_zone_a
>>>> 18      1                                       osd.18  up      1
>>>> -21     0.09999                 zone zone_c_ssd
>>>> -26     0.09999                 zone zone_b_ssd
>>>> -33     3.2             storage_group sata
>>>> -18     3                       zone zone_a_sata
>>>> -3      1                               host vsm2_sata_zone_a
>>>> 1       1                                       osd.1   up      1
>>>> -8      1                               host vsm3_sata_zone_a
>>>> 9       1                                       osd.9   up      1
>>>> -13     1                               host vsm4_sata_zone_a
>>>> 17      1                                       osd.17  up      1
>>>> -23     0.09999                 zone zone_c_sata
>>>> -28     0.09999                 zone zone_b_sata
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Wei Cao (Buddy)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>>
>>> Wheat
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>
>
> --
> Best Regards,
>
> Wheat



-- 
Best Regards,

Wheat
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
