Thanks. I'll give this a shot and we'll see what happens!
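For my own reference, here's roughly what I'm planning to try, assuming I've read the suggestion right (the directory path and rank number below are just placeholders on my end):

    # ceph.conf on the MDS hosts: disable the MDS load balancer
    [mds]
        mds_bal_max = 0

    # pin a busy directory subtree to a single MDS rank (rank 0 here)
    # using the ceph.dir.pin virtual extended attribute (export_pin)
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/some/busy/directory

If I understand the docs right, that pin is inherited by everything under the pinned directory unless a subdirectory is explicitly pinned elsewhere.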
jonathan
On Tue, Jan 29, 2019 at 8:47 AM Yan, Zheng wrote:
> On Tue, Jan 29, 2019 at 9:05 PM Jonathan Woytek wrote:
> >
> > On Tue, Jan 29, 2019 at 7:12 AM Yan, Zheng wrote:
> >>
> >> Looks like you have 5 active mds. I suspect your issue is related to
> >> load balancer. Please try disabling mds load balancer (add
> >> "mds_bal_max = 0" to mds section of ceph.conf) and use 'export_pin'
> >> to manually pin directories to mds.
On Fri, Jan 25, 2019 at 9:49 PM Jonathan Woytek wrote:
>
> Hi friendly ceph folks. A little while after I got the message asking for
> some stats, we had a network issue that caused us to take all of our
> processing offline for a few hours. Since we brought everything back up, I
> have been un[...]
On Thu, Jan 10, 2019 at 8:02 AM Jonathan Woytek wrote:
>
> On Wed, Jan 9, 2019 at 4:34 PM Patrick Donnelly wrote:
>>
>> Hello Jonathan,
>>
>> On Wed, Jan 9, 2019 at 5:37 AM Jonathan Woytek wrote:
>> > While working on examining performance under load at scale, I see a marked
>> > performance improvement whenever I would restart certain mds daemons. I was
>> > able to duplicate the performance improvement by issuing a "daemon m[...]
Hello ceph-users. I'm operating a moderately large ceph cluster with
cephfs. We currently have 288 OSDs, all on 10TB drives, and are getting
ready to migrate another 432 drives into the cluster (I'm going to have
more questions on that later). Our workload is highly distributed
(containerized [...]