Re: [ceph-users] Status of CephFS

2016-04-13 Thread Andrus, Brian Contractor
To: Christian Balzer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Status of CephFS

> On 13 Apr 2016, at 10:55, Christian Balzer wrote:
>
> On Wed, 13 Apr 2016 11:51:08 +0300 Oleksandr Natalenko wrote:
>
>> 13.04.2016 11:31, Vincenzo Pii wrote:
>>> T

Re: [ceph-users] Status of CephFS

2016-04-13 Thread Oleksandr Natalenko
Any direct experience with CephFS? I haven't tried anything newer than Hammer, but in Hammer CephFS is unable to apply back-pressure to very active clients. For example, rsyncing lots of files to a CephFS mount could result in MDS log overflow and OSD slow requests, especially if the MDS log is located on SSD and
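
A rough client-side workaround sketch (an illustration only, not something from this thread): pace the bulk copy and back off while the cluster reports warnings. The batch size and the copy_one() callback are placeholders.

    # Sketch: pace a bulk copy into a CephFS mount and back off while the
    # cluster is unhealthy, since Hammer-era CephFS does not throttle busy clients.
    import subprocess
    import time

    def cluster_healthy():
        out = subprocess.check_output(["ceph", "health"]).decode()
        return out.startswith("HEALTH_OK")

    def copy_in_batches(files, copy_one, batch=1000):
        for i, path in enumerate(files):
            copy_one(path)                     # placeholder for the actual copy
            if i % batch == 0:
                while not cluster_healthy():   # wait for slow-request warnings to clear
                    time.sleep(30)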

Re: [ceph-users] Status of CephFS

2016-04-13 Thread Vincenzo Pii
> On 13 Apr 2016, at 10:55, Christian Balzer wrote:
>
> On Wed, 13 Apr 2016 11:51:08 +0300 Oleksandr Natalenko wrote:
>
>> 13.04.2016 11:31, Vincenzo Pii wrote:
>>> The setup would include five nodes, two monitors and three OSDs, so
>>> data would be redundant (we would add the MDS for CephFS,

Re: [ceph-users] Status of CephFS

2016-04-13 Thread Christian Balzer
On Wed, 13 Apr 2016 11:51:08 +0300 Oleksandr Natalenko wrote:
> 13.04.2016 11:31, Vincenzo Pii wrote:
> > The setup would include five nodes, two monitors and three OSDs, so
> > data would be redundant (we would add the MDS for CephFS, of course).
>
> You need an uneven number of mons. In your case

Re: [ceph-users] Status of CephFS

2016-04-13 Thread Oleksandr Natalenko
13.04.2016 11:31, Vincenzo Pii wrote:
> The setup would include five nodes, two monitors and three OSDs, so
> data would be redundant (we would add the MDS for CephFS, of course).

You need an uneven number of mons. In your case I would set up mons on all 5 nodes, or at least on 3 of them.
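
To make the quorum arithmetic explicit (a small illustration, not from the thread; the helper name is made up): a monitor quorum is a strict majority, so two mons cannot survive the loss of either one, while three tolerate one failure and five tolerate two.

    # Paxos-style majority: quorum size for N monitors.
    def quorum_size(n_mons):
        return n_mons // 2 + 1

    for n in (2, 3, 5):
        q = quorum_size(n)
        print("%d mons -> quorum of %d -> tolerates %d failure(s)" % (n, q, n - q))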

[ceph-users] Status of CephFS

2016-04-13 Thread Vincenzo Pii
Hi All,

We would like to deploy Ceph and we would need to use CephFS internally, but of course without any compromise on data durability. The setup would include five nodes, two monitors and three OSDs, so data would be redundant (we would add the MDS for CephFS, of course). I would like to unde
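
On the durability point, a minimal sketch (assuming the python-rados bindings are installed, the default conffile path, and a placeholder pool name cephfs_data) of checking a pool's replication size from Python; size 3 means each object is kept on three OSD hosts under the default host-level CRUSH rule.

    # Sketch: query a pool's replica count via the rados Python bindings.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed default path
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd pool get", "pool": "cephfs_data",
                          "var": "size", "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b'')
        print(ret, out.decode(), errs)   # e.g. {"pool":"cephfs_data","size":3}
    finally:
        cluster.shutdown()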