Hi,
We're using ceph 10.2.5 and cephfs.
We had a weird issue with a monitor (mon0r0), which is also our current
active MDS node: it had some sort of meltdown.
The monitor node called elections on and off over ~1 hour, sometimes with
5-10 min between them.
On every occasion the MDS also went through replay, reconnect, rejoin => active
(
On 29-04-17 00:16, Gregory Farnum wrote:
> On Tue, Apr 4, 2017 at 2:49 AM, Jens Rosenboom wrote:
>> On a busy cluster, I'm seeing a couple of OSDs logging millions of
>> lines like this:
>>
>> 2017-04-04 06:35:18.240136 7f40ff873700 0
>> cls/log/cls_log.cc:129: storing entry at
>> 1_1491287718.2
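To get a sense of the scale of the flood, one could tally these lines
per hour with something like the following sketch (the log path is
hypothetical, and the pattern assumes the line layout shown above):

import re
from collections import Counter

# Hypothetical path; point this at the log of an affected OSD.
LOG_PATH = "/var/log/ceph/ceph-osd.0.log"

# Matches lines like:
#   2017-04-04 06:35:18.240136 7f40ff873700 0 cls/log/cls_log.cc:129: storing entry at ...
PATTERN = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}):\d{2}:\d{2}\.\d+ \S+\s+0\s+"
    r"cls/log/cls_log\.cc:\d+: storing entry"
)

per_hour = Counter()
with open(LOG_PATH) as log:
    for line in log:
        m = PATTERN.match(line)
        if m:
            per_hour[m.group(1)] += 1  # bucket by hour

for hour, count in sorted(per_hour.items()):
    print(f"{hour}:00  {count} entries")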
The LRC low-level plugin configuration in the following example copes with a single
erasure, while it could easily protect against two.
In case I use the layers:
1: DDc__
2: DDD____c_
3: ___DDD__c
Neither of the rules protects against 2 failures. However, if we calculate the XOR
of the two local p
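To make the XOR argument concrete, here is a toy sketch (the chunk
names d1..d6, c1, c2 are illustrative and not taken literally from the
layers above) showing that the XOR of two local parities acts as a
parity over the union of both groups:

def xor(*chunks: bytes) -> bytes:
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"aaaa", b"bbbb", b"cccc"   # group 1 data
d4, d5, d6 = b"dddd", b"eeee", b"ffff"   # group 2 data

c1 = xor(d1, d2, d3)   # local parity of group 1
c2 = xor(d4, d5, d6)   # local parity of group 2

# The XOR of the two local parities is a parity over all six data chunks:
assert xor(c1, c2) == xor(d1, d2, d3, d4, d5, d6)

# A single erasure in a group is repairable from its own local parity ...
assert xor(c1, d2, d3) == d1          # rebuild d1 from group 1
# ... and the combined parity is one more usable equation across groups.
assert xor(c1, c2, d1, d2, d3, d4, d5) == d6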
Hi Matan,
On 04/29/2017 10:47 PM, Matan Liram wrote:
> The LRC low-level plugin configuration in the following example copes with a
> single erasure, while it could easily protect against two.
>
> In case I use the layers:
> 1: DDc__
> 2: DDD____c_
> 3: ___DDD__c
>
> Neither of the rules protects
A few months ago, I posted here asking why the Ceph program takes so much
memory (virtual, real, and address space) for what seems to be a simple task.
Nobody knew, but I have done extensive research, now have the answer, and
thought I would publish it here.
All it takes to do a Ceph "status"
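One way to reproduce that kind of measurement, as a rough sketch (it
only captures peak resident set size; virtual size and address space
would need a /proc inspection):

import resource
import subprocess

# Run `ceph status` as a child and report the children's peak RSS.
# On Linux, ru_maxrss is in kilobytes.
subprocess.run(["ceph", "status"], check=True)
peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print(f"peak RSS of children: {peak_kb} kB")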
How interesting! Thank you for that.
On Sat, Apr 29, 2017 at 4:04 PM Bryan Henderson
wrote:
> A few months ago, I posted here asking why the Ceph program takes so much
> memory (virtual, real, and address space) for what seems to be a simple
> task.
> Nobody knew, but I have done extensive resea
Hi,
I did some basic experiments with mysql and measured the time taken by a
set of operations on CephFS and RBD. The RBD measurements were taken on a
1 GB RBD disk with an ext4 filesystem. Following are my observations. The
times listed below are in seconds.
*Plain file system* *CephFS
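For anyone wanting to repeat the comparison, here is a rough sketch of
a timing harness in the same spirit (the mount points are hypothetical,
and this toy fsync'd write loop stands in for the mysql workload that
was actually measured):

import os
import time

# Hypothetical mount points for the two filesystems under test.
TARGETS = {"CephFS": "/mnt/cephfs", "RBD+ext4": "/mnt/rbd"}

def timed_writes(directory: str, count: int = 1000, size: int = 4096) -> float:
    """Time `count` small synchronous writes, one file each."""
    payload = os.urandom(size)
    start = time.monotonic()
    for i in range(count):
        path = os.path.join(directory, f"bench_{i}.dat")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
    return time.monotonic() - start

for name, path in TARGETS.items():
    print(f"{name}: {timed_writes(path):.2f} s")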