On Wed, Oct 29, 2014 at 1:37 PM, Haomai Wang wrote:

Maybe you can run it directly with debug_osd=20/20 and get the ending logs:

ceph-osd -i 1 -c /etc/ceph/ceph.conf -f
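
For reference, a minimal sketch of that foreground run, assuming osd id 1,
the stock config path, and a sysvinit-style service; the service-stop step
and the tee'd log path are illustrative, not from the thread:

    # stop the init script first so it does not respawn the daemon
    sudo service ceph stop osd.1

    # run in the foreground with verbose osd logging, keeping a copy of
    # the output so the last lines before the exit are preserved
    ceph-osd -i 1 -c /etc/ceph/ceph.conf -f \
        --debug-osd 20/20 2>&1 | tee /tmp/osd.1.debug.log
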
On Wed, Oct 29, 2014 at 1:28 PM, Haomai Wang wrote:

Thanks!

You mean osd.1 exited abruptly, without a ceph callback trace?
Does anyone have ideas about this log? @sage @gregory

On Wed, Oct 29, 2014 at 1:11 PM, Haomai Wang wrote:

Thanks, Andrey.

The attached OSD.1 log contains only these lines? I really can't find any
detailed info in it.

Maybe you need to raise debug_osd to 20/20?
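
A minimal ceph.conf sketch of that setting, assuming the usual [osd]
section; in Ceph's A/B debug syntax the first number is the level written
to the log file and the second is the more verbose level kept in memory
for crash dumps:

    [osd]
        debug osd = 20/20
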
On Wed, Oct 29, 2014 at 5:25 PM, Andrey Korolyov wrote:

Hi Haomai, all.

Today, after an unexpected power failure, one of the kv stores (placed on
ext4 with default mount options) refused to work. I think it may be
interesting to revive it, because this is almost the first time among
hundreds of power failures (and their simulations) that a data store got
broken.

I reported that problem a couple of weeks ago:

From: ceph-users <ceph-users-boun...@lists.ceph.com>
Date: 2014-10-26 17:46
To: Haomai Wang <haomaiw...@gmail.com>
CC: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Continuous

On Sun, Oct 26, 2014 at 3:12 AM, Andrey Korolyov wrote:

Thanks Haomai. It turns out that master's recovery is too buggy right now
(recovery speed degrades over time, a non-kv OSD drops out of the cluster
for no reason, the misplaced-object calculation is wrong, and so on), so I
am sticking to giant with rocksdb for now. So far no major problems have
been revealed.
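
For context, a sketch of how that backend choice was typically expressed
in ceph.conf around the Firefly/Giant era; the exact option spellings here
(keyvaluestore-dev, keyvaluestore backend) varied between releases, so
treat them as assumptions to verify against your build:

    [osd]
        # experimental key/value object store instead of the default FileStore
        osd objectstore = keyvaluestore-dev
        # back the kv store with rocksdb rather than the default leveldb
        keyvaluestore backend = rocksdb
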
Haomai Wang wrote:

It's not stable at Firefly for kvstore. But for the master branch there
should be no existing/known bugs.

On Fri, Oct 24, 2014 at 7:41 PM, Andrey Korolyov wrote:

Hi,

During recovery testing on the latest firefly with the leveldb backend we
found that the OSDs on a selected host may crash all at once, leaving the
attached backtrace. Otherwise, recovery goes more or less smoothly for
hours. Timestamps show how the issue is correlated between different
processes on s...