I think you may increase mds_bal_fragment_size_max; the default is 10
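A minimal sketch of one way to apply that at runtime (the MDS name "a" and the value 100000 are placeholders, not values from this thread, and injectargs changes do not persist across daemon restarts):

import subprocess

# Sketch: raise mds_bal_fragment_size_max on a running MDS via injectargs.
# "mds.a" and 100000 are placeholders; persist the chosen value in ceph.conf too.
NEW_LIMIT = 100000
subprocess.check_call([
    "ceph", "tell", "mds.a", "injectargs",
    "--mds_bal_fragment_size_max={}".format(NEW_LIMIT),
])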
>> On Oct 4, 2016, at 10:30 AM, John Spray wrote:
>>
>>> On Tue, Oct 4, 2016 at 5:09 PM, Stephen Horton wrote:
>>> Thank you John. Both my Openstack hosts and the VMs are all running
>>> 4.4.0-38-generic #57-Ubuntu SMP x86_64. [...]
Hello Zheng,
This is my initial email containing the ceph -s and session ls info. I will
send the cache dump shortly. Note that, per John's suggestion, I have upgraded
the offending clients to the 4.8 kernel, so my cache dump will reflect these
new clients.
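A minimal sketch of one way to collect those diagnostics (the daemon name mds.ceph-1, the output paths, and the path argument to "dump cache" are assumptions; the "ceph daemon" commands have to run on the node hosting the MDS):

import subprocess

MDS = "mds.ceph-1"  # placeholder daemon name

# Cluster status and the MDS session list, saved for sharing.
with open("/tmp/ceph-status.txt", "wb") as f:
    f.write(subprocess.check_output(["ceph", "-s"]))
with open("/tmp/session-ls.json", "wb") as f:
    f.write(subprocess.check_output(["ceph", "daemon", MDS, "session", "ls"]))

# Dump the MDS cache to a file on the MDS host (the file can be large).
subprocess.check_call(["ceph", "daemon", MDS, "dump", "cache", "/tmp/mds-cache.dump"])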
Thanks,
Stephen
Begin forwarded message:
Clients are almost all idle, with very little load on the cluster. I can see no
errors or warnings in the client logs when the file share is unmounted. Thx!
> On Oct 4, 2016, at 10:31 PM, Yan, Zheng wrote:
>
>> On Tue, Oct 4, 2016 at 11:30 PM, John Spray wrote:
>>> On Tue, Oct 4, 2016 at 5:09 PM, Stephen Horton wrote: [...]
On Tue, Oct 4, 2016 at 11:30 PM, John Spray wrote:
> On Tue, Oct 4, 2016 at 5:09 PM, Stephen Horton wrote:
>> Thank you John. Both my Openstack hosts and the VMs are all running
>> 4.4.0-38-generic #57-Ubuntu SMP x86_64. I can see no evidence that any of
>> the VMs are holding large numbers of files open. [...]
Thanks again John. I am installing the 4.8.0-040800 kernel on my VM clients and
will report back. Just to confirm: for this issue, there is no reason to try the
newer kernel on the MDS node, correct?
> On Oct 4, 2016, at 10:30 AM, John Spray wrote:
>
>> On Tue, Oct 4, 2016 at 5:09 PM, Stephen Horton wrote: [...]
On Tue, Oct 4, 2016 at 5:09 PM, Stephen Horton wrote:
> Thank you John. Both my Openstack hosts and the VMs are all running
> 4.4.0-38-generic #57-Ubuntu SMP x86_64. I can see no evidence that any of the
> VMs are holding large numbers of files open. If this is likely a client bug,
> is there some process I can follow to file a bug report?
Thank you John. Both my Openstack hosts and the VMs are all running
4.4.0-38-generic #57-Ubuntu SMP x86_64. I can see no evidence that any of the
VMs are holding large numbers of files open. If this is likely a client bug, is
there some process I can follow to file a bug report?
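A rough sketch of one way to double-check that from the MDS side, by ranking client sessions by capability count (the daemon name is a placeholder, and the num_caps and client_metadata fields are assumed to be present in the "session ls" output):

import json
import subprocess

MDS = "mds.ceph-1"  # placeholder daemon name; run this on the MDS host

sessions = json.loads(
    subprocess.check_output(["ceph", "daemon", MDS, "session", "ls"]).decode())

# Sort clients by how many capabilities (roughly, open files/inodes) each holds.
for s in sorted(sessions, key=lambda x: x.get("num_caps", 0), reverse=True):
    meta = s.get("client_metadata", {}) or {}
    print(s.get("id"), s.get("num_caps", 0), meta.get("hostname", "?"))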
On Tue, Oct 4, 2016 at 4:27 PM, Stephen Horton wrote:
> Adding that all of my ceph components are version:
> 10.2.2-0ubuntu0.16.04.2
>
> Openstack is Mitaka on Ubuntu 16.04x. Manila file share is 1:2.0.0-0ubuntu1
>
> My scenario is that I have a 3-node ceph cluster running openstack mitaka.
> Each node has 256gb ram, 14tb raid 5 array. [...]
Adding that all of my ceph components are version:
10.2.2-0ubuntu0.16.04.2
Openstack is Mitaka on Ubuntu 16.04x. Manila file share is 1:2.0.0-0ubuntu1
My scenario is that I have a 3-node ceph cluster running openstack mitaka. Each
node has 256gb ram, 14tb raid 5 array. I have 30 VMs running [...]
I am using Ceph to back Openstack Nova ephemeral storage, Cinder volumes, Glance
images, and Openstack Manila File Share storage. Originally I was using
ceph-fuse with Manila, but performance and resource usage were poor, so I
changed to the CephFS kernel driver. Now, however, I am getting messages [...]
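For reference, a rough sketch of the two mount paths being compared here (the monitor address, share path, client name, and secret file are all placeholders, not values from this deployment):

import subprocess

MON = "192.0.2.10:6789"      # placeholder monitor address
SHARE = "/volumes/share1"    # placeholder CephFS path exported by Manila
MNT = "/mnt/share"

def mount_fuse():
    # Userspace client (the original setup).
    subprocess.check_call(
        ["ceph-fuse", "-m", MON, "--id", "manila", "-r", SHARE, MNT])

def mount_kernel():
    # In-kernel CephFS client (what the VMs were switched to).
    subprocess.check_call(
        ["mount", "-t", "ceph", "{}:{}".format(MON, SHARE), MNT,
         "-o", "name=manila,secretfile=/etc/ceph/manila.secret"])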