2017-06-12 16:10 GMT+02:00 David Turner <drakonst...@gmail.com>:

> I have an incredibly light-weight cephfs configuration.  I set up an MDS
> on each mon (3 total), and have 9TB of data in cephfs.  This data only has
> 1 client that reads a few files at a time.  I haven't noticed any downtime
> when it fails over to a standby MDS.  So it definitely depends on your
> workload as to how a failover will affect your environment.
>
> On Mon, Jun 12, 2017 at 9:59 AM John Petrini <jpetr...@coredial.com>
> wrote:
>
>> We use the following in our ceph.conf for MDS failover. We're running one
>> active and one standby. Last time it failed over there was about 2 minutes
>> of downtime before the mounts started responding again but it did recover
>> gracefully.
>>
>> [mds]
>> max_mds = 1
>> mds_standby_for_rank = 0
>> mds_standby_replay = true
>>
>> ___
>>
>> John Petrini
>>
>

Thanks to both.
I'm working on this right now because I need very fast failover. So far my
tests show a very fast response when an OSD fails (about 5 seconds), but a
very slow response when the active MDS fails (I haven't measured the exact
time, but the mount was unresponsive for quite a while). That may have been
because I created the second MDS only after mounting the filesystem: I ran
some more tests just before sending this email, and now failover looks very
fast (I didn't notice any downtime).
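
In case it helps anyone tuning the same thing, here is a sketch of what I'm
testing now. The standby settings are the same ones John posted;
mds_beacon_grace is, as far as I understand it, the timeout the mons use
before declaring the active MDS laggy and promoting a standby, so lowering
it should shorten failover (the section placement and the value below are
only my assumption and an example, not a recommendation):

[mds]
# pin the standby to rank 0 and have it continuously replay the
# active MDS journal, so it can take over with a warm cache
mds_standby_for_rank = 0
mds_standby_replay = true

[global]
# seconds without an MDS beacon before the mons mark the active
# MDS laggy and promote a standby (the default is 15)
mds_beacon_grace = 15

To time a failover I just run "ceph mds stat" in a loop and kill the active
MDS daemon; the output shows which daemon holds rank 0 and when the standby
becomes active.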

Greetings!!


-- 
_________________________________________

      Daniel Carrasco Marín
      Ingeniería para la Innovación i2TIC, S.L.
      Tel:  +34 911 12 32 84 Ext: 223
      www.i2tic.com
_________________________________________