Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Linh Vu
Thanks Patrick! Good to know that it's nothing and will be fixed soon :)

From: Patrick Donnelly
Sent: Wednesday, 25 April 2018 5:17:57 AM
To: Linh Vu
Cc: ceph-users
Subject: Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Patrick Donnelly
Hello Linh,

On Tue, Apr 24, 2018 at 12:34 AM, Linh Vu wrote:
> However, on our production cluster, with more powerful MDSes (10 cores
> 3.4GHz, 256GB RAM, much faster networking), I get this in the logs
> constantly:
>
> 2018-04-24 16:29:21.998261 7f02d1af9700 0 mds.1.migrator nicely exporting
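For reference, the manual pinning discussed in this thread is done by setting the ceph.dir.pin extended attribute on a directory from a client mount. A minimal sketch, assuming a mount at /mnt/cephfs (the paths and rank numbers are illustrative, not taken from this thread):

    # Pin one subtree to MDS rank 0 and another to rank 1
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/scratch

    # A value of -1 removes the pin and hands the subtree
    # back to the default balancer
    setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/scratch

Pins are inherited by subdirectories, so pinning the top-level directory of each workload is usually enough to keep its metadata traffic on the intended rank.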

Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Linh Vu
er clients joined the warning list. Only restarting mds.0 so that the standby mds replaces it restored cluster health.

Cheers,
Linh

From: Dan van der Ster
Sent: Tuesday, 24 April 2018 6:20:18 PM
To: Linh Vu
Cc: ceph-users
Subject: Re: [ceph-users] cephfs lumino
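The recovery step Linh describes (having a standby take over from mds.0) does not require restarting the daemon on its host; the rank can also be failed from any node with admin access. A hedged sketch, with the hostname as a placeholder:

    # Fail rank 0; an available standby MDS takes over that rank
    ceph mds fail 0

    # Equivalent effect by restarting the daemon on its host
    # (mds-host-01 is a placeholder name)
    systemctl restart ceph-mds@mds-host-01

    # Confirm which daemon now holds each rank
    ceph fs status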

Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with manual pinning

2018-04-24 Thread Dan van der Ster
That "nicely exporting" thing is a logging issue that was apparently fixed in https://github.com/ceph/ceph/pull/19220. I'm not sure if that will be backported to luminous. Otherwise the slow requests could be due to either slow trimming (see previous discussions about mds log max expiring and mds