To: Christian Balzer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Status of CephFS
> On 13 Apr 2016, at 10:55, Christian Balzer wrote:
>
> On Wed, 13 Apr 2016 11:51:08 +0300 Oleksandr Natalenko wrote:
>
>> 13.04.2016 11:31, Vincenzo Pii wrote:
>>> The setup would include five nodes, two monitors and three OSDs, so
>>> data would be redundant (we would add the MDS for CephFS, of course).
>
> Any direct experience with CephFS?
Haven't tried anything newer than Hammer, but in Hammer CephFS is unable
to apply back-pressure to very active clients. For example, rsyncing lots
of files to a CephFS mount could result in MDS log overflow and OSD slow
requests, especially if the MDS log is located on SSD and …
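
If you want to watch this happening on your own cluster, a rough sketch
along these lines polls the MDS admin socket while the rsync runs. It is
only a sketch: it assumes the MDS id is "mds.a" and that the perf dump
exposes an "mds_log" section with "seg"/"ev" counters, which may differ
between releases.

    # Rough sketch: poll the MDS perf counters while a bulk rsync runs.
    # Assumptions: run on the MDS host, MDS id is "mds.a", and the perf dump
    # has an "mds_log" section with "seg"/"ev" counters (names vary by release).
    import json
    import subprocess
    import time

    MDS_NAME = "mds.a"  # placeholder, replace with your own MDS id

    def mds_log_counters(name):
        out = subprocess.check_output(["ceph", "daemon", name, "perf", "dump"])
        return json.loads(out.decode("utf-8")).get("mds_log", {})

    while True:
        log = mds_log_counters(MDS_NAME)
        print("journal segments: %s, events: %s" % (log.get("seg"), log.get("ev")))
        time.sleep(10)

If the segment count keeps climbing while clients write, the MDS is not
keeping up with journal trimming, which is exactly when the slow requests
show up.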
___
13.04.2016 11:31, Vincenzo Pii wrote:
> The setup would include five nodes, two monitors and three OSDs, so
> data would be redundant (we would add the MDS for CephFS, of course).

You need an uneven number of mons. In your case I would set up mons on
all 5 nodes, or at least on 3 of them.
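
The reason is plain majority arithmetic (a quick illustration in Python,
not Ceph code): the monitors must form a strict-majority quorum, so an
even count adds a daemon without adding any failure tolerance.

    # Monitor quorum arithmetic: a strict majority of mons must be up.
    def tolerated_failures(num_mons):
        quorum = num_mons // 2 + 1   # smallest strict majority
        return num_mons - quorum     # mons you can lose and still have quorum

    for n in (1, 2, 3, 4, 5):
        print("%d mon(s): quorum needs %d, tolerates %d failure(s)"
              % (n, n // 2 + 1, tolerated_failures(n)))
    # 2 mons tolerate 0 failures, 3 tolerate 1, 4 still only 1, 5 tolerate 2.

With only two mons, losing either one stops the whole cluster, which is
worse than running a single mon.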
___
Hi All,
We would like to deploy Ceph and we would need to use CephFS internally,
but of course without any compromise on data durability.
The setup would include five nodes, two monitors and three OSDs, so data
would be redundant (we would add the MDS for CephFS, of course).
I would like to understand the current status of CephFS.
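
For reference, with three OSD nodes the CephFS pools would typically be
created with three replicas; a rough sketch of what that could look like
(pool names and PG counts are placeholders, not from this thread):

    # Rough sketch of creating the CephFS pools and filesystem with 3-way
    # replication (pool names and PG counts below are placeholders).
    import subprocess

    def ceph(*args):
        subprocess.check_call(("ceph",) + args)

    ceph("osd", "pool", "create", "cephfs_data", "128")
    ceph("osd", "pool", "create", "cephfs_metadata", "128")
    ceph("osd", "pool", "set", "cephfs_data", "size", "3")      # three copies
    ceph("osd", "pool", "set", "cephfs_metadata", "size", "3")
    ceph("fs", "new", "cephfs", "cephfs_metadata", "cephfs_data")

The "size 3" setting is what actually gives the redundancy: each object is
stored on three different OSD hosts, so the cluster survives the loss of
any single OSD node.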