>>>> 2015-11-26 08:31:49.273455 7fe4f49b1700 0 --
>>>> 192.168.254.18:6816/110740 >> 192.168.254.12:0/1011754
>>>> pipe(0x41fd1000
>>>> sd=98 :6816 s=0 pgs=0 cs=0 l=1 c=0x3ee19080).accept: got bad
>>>> authorizer
>>>> could not find
>>>> secret_id=2924
>>>>
>>>> What does it mean? Google says it might be a time sync issue, but my
>>>> clocks are perfectly synchronized...
>>>
>>
>> Normally you get an error warning in "ceph status" if time is out of sync.
>> A "bad authorizer ... could not find secret_id" message means the OSD could
>> not find the rotating cephx key with that id, which usually happens when
>> clocks drift and tickets expire. Nevertheless, you can try to restart the
>> OSDs. I had issues with timing in the past after changing the times, before
>> the daemons accepted the new timings. But this was mostly the case with
>> monitors though.
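>>
>> To double-check both things, something like this on each node should be
>> enough (assuming ntpd and a sysvinit-style Hammer install; adjust to your
>> setup, and the OSD id is only an example):
>>
>>   ntpq -p                      # peer offsets should be within a few ms
>>   ceph health detail           # would show "clock skew detected" warnings
>>   service ceph restart osd.12  # restart a single OSD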
>>
>> Regards,
>>
>> Mart
>>
>> 2015-11-26 21:05 GMT+08:00 Irek Fasikhov <malm...@gmail.com>:
>> > Hi.
>> >
>> > Best regards, Фасихов Ирек Нургаязович
>> > Mob.: +79229045757
>> >
>> > 2015-11-26 13:16 GMT+03:00 ЦИТ РТ-Курамшин Камиль Фидаилевич
>> > <kamil.kurams...@tatar.ru>:
>> >>
>> >> It seems that you played around with the crushmap and did something
>> >> wrong. Compare the output of 'ceph osd tree' with the crushmap: there
>> >> are some 'osd' devices renamed to 'device', and I think that is your
>> >> problem.
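>> >>
>> >> You can dump and decompile the crushmap to compare it by hand (the
>> >> file names here are just examples):
>> >>
>> >>   ceph osd getcrushmap -o crush.bin    # grab the compiled map
>> >>   crushtool -d crush.bin -o crush.txt  # decompile to plain text
>> >>   grep ^device crush.txt               # devices section, one line per OSD
>> >>
>> >> In the decompiled map a healthy entry looks like 'device 5 osd.5'; a
>> >> removed or broken one shows up as a placeholder like 'device 5 device5'.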
>> >>
>> >> Sent from a mobile device.
>> >>
>> >> -----Original Message-----
>> >> From: Vasiliy Angapov
>> >> To: ceph-users
>> >> Sent: Thu, 26 Nov 2015 7:53
>> >> Subject: [ceph-users] Undersized pgs problem
>> >>
>> >> Hi, colleagues!
>> >>
>> >> I have a small 4-node Ceph cluster (0.94.2); all pools have size 3,
>> >> min_size 1. This night one host failed and the cluster was unable to
>> >> rebalance, saying there are a lot of undersized pgs.
>> >>
>> >> root@slpeah002:[~]:# ceph -s
>> >>     cluster 78eef61a-3e9c-447c-a3ec-ce84c617d728
>> >>      health HEALTH_WARN
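>> >>
>> >> The affected pgs and the current OSD-to-host mapping can be listed with:
>> >>
>> >>   ceph health detail          # names each undersized/degraded pg
>> >>   ceph pg dump_stuck unclean  # pgs stuck in a non-clean state
>> >>   ceph osd tree               # how the surviving OSDs map to hosts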