> On 26 Nov 2014, at 17:26, Dan Van Der Ster <daniel.vanders...@cern.ch> wrote:
> 
> Hi,
> 
>> On 26 Nov 2014, at 17:07, Yujian Peng <pengyujian5201...@126.com> wrote:
>> 
>> 
>> Thanks a lot! 
>> IOPS are a bottleneck in my cluster, and the object disks are much slower 
>> than the SSDs. I don't know whether the SSDs will be used as caches if 
>> filestore_max_sync_interval is set to a large value. I will try a couple of 
>> values for filestore_max_sync_interval and observe the effect.
>> 
>> If filestore_max_sync_interval is greater than 30s, how should I set the 
>> kernel vm dirty buffer parameters?
>> 
> 
> In the past I ran some tests to try to completely eliminate all background 
> flushing to the OSD devices. For that, I did something like:
> 
> filestore max sync interval = 120
> filestore min sync interval = 119
> 
> vm.dirty_background_ratio = 40
> vm.dirty_background_bytes = 0
> vm.dirty_ratio = 40
> vm.dirty_bytes = 0
> vm.dirty_writeback_centisecs = 500
> vm.dirty_expire_centisecs = 3000
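> 
> A minimal sketch of how those could be applied (untested as written; the 
> [osd] section of ceph.conf and sysctl -w are the standard mechanisms, adjust 
> the devices/hosts to your setup):
> 
>   # /etc/ceph/ceph.conf on the OSD hosts, then restart the OSDs
>   [osd]
>   filestore max sync interval = 120
>   filestore min sync interval = 119
> 
>   # runtime-only sysctls; these do not survive a reboot
>   sysctl -w vm.dirty_background_ratio=40 vm.dirty_background_bytes=0
>   sysctl -w vm.dirty_ratio=40 vm.dirty_bytes=0
>   sysctl -w vm.dirty_writeback_centisecs=500 vm.dirty_expire_centisecs=3000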
> 
> Given those settings, you should then run a test and check iostat -xm 1. You 
> should see writes to the journals but no writes to the OSD devices. (If you 
> increase the debug_filestore level to 10 or 20, you can also see exactly when 
> the filestore is sync’d and correlate that with what you see in iostat.) 
> After this test you should have a good idea of the IOPS you get when the 
> SSDs alone are handling the writes.
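> 
> Concretely, something along these lines (the injectargs syntax should work, 
> but double-check it on your release; substitute your own devices and OSD 
> ids, which are assumptions here):
> 
>   # watch the journal SSD vs. the OSD data disk during the test
>   iostat -xm 1 /dev/sdb /dev/sdc
> 
>   # raise the filestore debug level on one OSD and follow its log
>   ceph tell osd.0 injectargs '--debug-filestore 10'
>   tail -f /var/log/ceph/ceph-osd.0.log | grep -i sync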
> 
> For production, with a 120s filestore sync interval, you can probably live 
> with something like:
> 
> filestore max sync interval = 120
> filestore min sync interval = <the default>
> 
> vm.dirty_background_ratio = 10
> vm.dirty_background_bytes = 0
> vm.dirty_ratio = 40
> vm.dirty_bytes = 0
> vm.dirty_writeback_centisecs = 500
> vm.dirty_expire_centisecs = 1200

oops, that should be

vm.dirty_expire_centisecs = 12000
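
And in case it helps: to make the production vm.* values persist across 
reboots, something like this should work (the file name is arbitrary; on 
older distros put the lines in /etc/sysctl.conf instead):

  # /etc/sysctl.d/90-ceph-osd.conf
  vm.dirty_background_ratio = 10
  vm.dirty_background_bytes = 0
  vm.dirty_ratio = 40
  vm.dirty_bytes = 0
  vm.dirty_writeback_centisecs = 500
  vm.dirty_expire_centisecs = 12000

  # load it without rebooting
  sysctl -p /etc/sysctl.d/90-ceph-osd.conf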

Cheers, Dan

> 
> Please refer to the doc to get a full understanding of how to tune those 
> values: https://www.kernel.org/doc/Documentation/sysctl/vm.txt
> 
> Cheers, Dan
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
