June 2014 11:00, Stanislav Yanchev wrote:

> Try grep in cs1 and cs3 could be a disk space issue.
>
> Regards,
>
> *Stanislav Yanchev*
> Core System Administrator
>
> [image: MAX TELECOM]
>
> Mobile: +359 882 549 441
> s.yanc...@maxtelecom.bg
> www.maxtelecom.bg
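Stanislav's suggestion (check cs1 and cs3 for a disk space issue) can be verified quickly on each node. A minimal sketch, assuming the logs live under /var/log and using an arbitrary 90% threshold (both assumptions, not stated in the thread):

```shell
# Hypothetical quick check for the disk-space theory: a nearly full
# partition under the log directory can trigger mon warnings.
# The path and the 90% threshold are assumptions, not from the thread.
logdir=/var/log
usage=$(df -P "$logdir" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
echo "disk usage for $logdir: ${usage}%"
if [ "$usage" -ge 90 ]; then
  echo "partition is nearly full - worth checking on cs1 / cs3"
fi
```

Running this on each of the three nodes would rule the disk-space theory in or out in a few seconds.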
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Andrija Panic
Sent: Tuesday, June 17, 2014 11:57 AM
To: Christian Balzer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cluster status reported wrongly as HEALTH_WARN

Hi Christian,

that seems true, thanks.

But again, there are only occurrences in the GZ log files (the ones that were logrotated), not in the current log files.

Example:

[root@cs2 ~]# grep -ir "WRN" /var/log/ceph/
Binary file /var/log/ceph/ceph-mon.cs2.log-20140612.gz matches
Binary file /var/log/ceph/ceph.log-201
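A side note on the grep output above: plain `grep` treats the rotated `.gz` files as binary and only reports "Binary file ... matches", while `zgrep` decompresses them on the fly and shows the matching lines. A self-contained sketch (using a throwaway directory and a made-up log line rather than the real /var/log/ceph):

```shell
# grep sees rotated .gz logs as binary; zgrep searches inside them.
# Demo uses a throwaway directory, not the real /var/log/ceph.
tmpdir=$(mktemp -d)
printf '2014-06-12 10:00:00.000 mon.cs2 [WRN] example warning line\n' \
  > "$tmpdir/ceph-mon.cs2.log-20140612"
gzip "$tmpdir/ceph-mon.cs2.log-20140612"
# Case-insensitive search across the compressed logs:
matches=$(zgrep -i "wrn" "$tmpdir"/*.gz)
echo "$matches"
rm -rf "$tmpdir"
```

With `zgrep` the actual WRN lines (and their timestamps) become visible, which helps confirm whether the warnings are only historical.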
Hello,

On Tue, 17 Jun 2014 10:30:44 +0200 Andrija Panic wrote:

> Hi,
>
> I have a 3 node (2 OSD per node) CEPH cluster, running fine, not much
> data, network also fine:
> Ceph ceph-0.72.2.
>
> When I issue the "ceph status" command, I randomly get HEALTH_OK, and
> immediately after that, when repeating the command, I get HEALTH_WARN.
> Example given down - these commands were issued with
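The flapping described above is easiest to demonstrate by sampling the health status several times in a row. A sketch, assuming it runs on a mon/admin node; `get_health` is a hypothetical wrapper that stubs out `ceph health` so the snippet still runs when no cluster is reachable:

```shell
# Sample cluster health repeatedly to catch the OK/WARN flapping.
# get_health is a stand-in wrapper (an assumption, not from the thread):
# it falls back to a placeholder when no ceph cluster is reachable.
get_health() {
  ceph health 2>/dev/null || echo "HEALTH_UNKNOWN (no cluster reachable)"
}
for i in 1 2 3; do
  get_health
  sleep 1
done
```

If consecutive samples alternate between HEALTH_OK and HEALTH_WARN, the warning is likely transient (e.g. a brief mon or clock issue) rather than a persistent cluster fault.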