Begin forwarded message:
> From: Stephen Horton
> Date: October 3, 2016 at 6:45:49 PM CDT
> To: "ceph-users@lists.ceph.com"
> Subject: cephfs kernel driver - failing to respond to cache pressure
> Reply-To: Stephen Horton
>
> I am using Ceph to back Openstack Nova ephemeral, Cinder volumes, Glance
> images, and Openstack Manila File Share storage. Originally, I was using
> ceph-fuse with Manila, but performance and resource usage were poor, so I
> changed to using the CephFS kernel driver. Now however, I am getting messages
> that my clients are failing to respond to cache pressure.
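
The "failing to respond to cache pressure" warning can be narrowed down from
the cluster side before touching any clients. A minimal check, assuming an MDS
daemon named mds.a (the daemon name is a placeholder; substitute your own MDS
id):

    # Show cluster health, including which client sessions the MDS is flagging
    ceph health detail

    # On the MDS host: list client sessions and their num_caps counts
    sudo ceph daemon mds.a session ls

A client holding an unusually large num_caps relative to its peers is the
usual suspect.
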
On Tue, Oct 4, 2016 at 5:09 PM, Stephen Horton wrote:
>>> Thank you John. Both my Openstack hosts and the VMs are all running
>>> 4.4.0-38-generic #57-Ubuntu SMP x86_64. I can see no evidence that any of
>>> the VMs are holding large numbers of files open. If this is likely a
>>> client [...]
>
> Are there any noteworthy fixes between 4.4 and latest kernel that might be
> relevant?
>
> John
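
On the question of open files: rather than guessing from inside the VMs, the
CephFS kernel client exposes its state under debugfs. A sketch, assuming the
share is mounted at /mnt/share inside a VM (the mount point is a placeholder,
and the debugfs output format varies by kernel version):

    # Mount debugfs if it is not already mounted
    sudo mount -t debugfs none /sys/kernel/debug

    # Capabilities currently held by this kernel client
    cat /sys/kernel/debug/ceph/*/caps

    # Rough count of open files on the share
    sudo lsof /mnt/share | wc -l
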
> On Oct 4, 2016, at 9:39 AM, John Spray wrote:
>
>> On Tue, Oct 4, 2016 at 4:27 PM, Stephen Horton wrote:
>> Adding that all of my ceph components are version:
>> 10.2.2-0ubuntu0.16.04.2
>>
>> Openstack is Mitaka on Ubuntu 16.04.x. Manila file share is 1:2.0.0-0ubuntu1.
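
A mitigation that often comes up for this warning on Jewel is giving the MDS a
larger cache, so that it pressures clients to trim less aggressively. A sketch
of the ceph.conf knob (the value is illustrative only; the Jewel default is
100000 inodes):

    [mds]
    # Number of inodes the MDS may keep in its cache
    mds cache size = 400000
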
On Tue, Oct 4, 2016 at 4:27 PM, Stephen Horton wrote:

Adding that all of my ceph components are version:
10.2.2-0ubuntu0.16.04.2

Openstack is Mitaka on Ubuntu 16.04.x. Manila file share is 1:2.0.0-0ubuntu1.

My scenario is that I have a 3-node ceph cluster running Openstack Mitaka.
Each node has 256 GB of RAM and a 14 TB RAID 5 array. I have 30 VMs running in
Openstack. I am using Ceph to back Openstack Nova ephemeral, Cinder volumes,
Glance images, and Openstack Manila File Share storage. Originally, I was
using ceph-fuse with Manila, but performance and resource usage were poor, so
I changed to using the CephFS kernel driver. Now however, I am getting
messages that my clients are failing to respond to cache pressure.
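
Since the switch from ceph-fuse to the kernel driver is the key variable in
this thread, the two mount paths look roughly as follows; the monitor address,
mount point, and credential file are placeholders, not details taken from the
thread:

    # ceph-fuse (userspace client, used originally with Manila)
    sudo ceph-fuse -m 192.0.2.10:6789 /mnt/share

    # CephFS kernel driver (what the shares use now)
    sudo mount -t ceph 192.0.2.10:6789:/ /mnt/share \
        -o name=manila,secretfile=/etc/ceph/manila.secret

The kernel client avoids FUSE overhead, but its capability handling lives in
the kernel, which is why the kernel version matters when chasing this warning.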