Hi,

On 12/07/2016 02:51, Brad Hubbard wrote:
> [...]
>> This is probably a fragmentation problem: typical rbd access patterns
>> cause heavy BTRFS fragmentation.
> To the extent that operations take over 120 seconds to complete? Really?

Yes, really. I had these too. By default Ceph/R
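For readers hitting the same symptom, a commonly discussed mitigation for Filestore-on-btrfs fragmentation is to stop Filestore from using btrfs snapshots and to mount the data volumes with btrfs's `autodefrag` option. A hedged sketch, assuming Jewel-era Filestore (verify the option name for your release, e.g. with `ceph daemon osd.0 config show`, before applying):

```ini
# ceph.conf excerpt -- a sketch, assuming Jewel-era Filestore on btrfs.
# Verify option names against your Ceph release before applying.
[osd]
# Filestore takes btrfs snapshots for consistency by default; disabling
# them trades that mechanism for less metadata churn on volumes that
# fragment badly under rbd workloads.
filestore btrfs snap = false
```

Mounting the OSD data volumes with `-o autodefrag` (a standard btrfs mount option) is the other half of the usual advice; note that it adds background I/O of its own.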
Hi cephers.
I need your help with some issues.
The ceph cluster version is Jewel (10.2.1), and the filesystem is btrfs.
I run 1 Mon and 48 OSDs across 4 nodes (each node has 12 OSDs).
One of the OSDs has been killing itself; it always logs a suicide timeout message.
Below are the detailed logs.
=
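For context on the message above: the "suicide timeout" comes from the OSD's internal heartbeat check. When a worker thread stalls past the configured limit, the daemon deliberately aborts. Raising the limit is only a diagnostic stopgap, never a fix; a hedged ceph.conf sketch, assuming the Jewel-era option name (`osd op thread suicide timeout`, default 150 seconds):

```ini
# ceph.conf excerpt -- diagnostic stopgap only; the underlying stall
# (e.g. fragmentation-induced slow I/O) still needs to be fixed.
[osd]
# Default is 150 seconds in Jewel; setting 0 disables the suicide
# check entirely, which is risky in production.
osd op thread suicide timeout = 300
```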