I have seen this, and some of our big customers have seen it too. I was using 
8TB HDDs, and small tests against a fresh HDD setup showed very good 
performance. I then loaded the Ceph cluster so that each 8TB HDD held 4TB and 
reran the same tests; performance was cut in half. This was with the default 
settings for how Ceph creates the directories and subdirectories on each OSD. 
You can flatten that directory structure so it is wider than it is deep, and 
performance improves. Check out the filestore_merge_threshold and 
filestore_split_multiple settings.
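If it helps, here is a minimal ceph.conf sketch showing where these settings
live. The values below are just an illustration of "wider than deep", not a
recommendation; tune them for your own object counts:

    [osd]
    # A filestore subdirectory splits once it holds more than roughly
    #   abs(filestore_merge_threshold) * filestore_split_multiple * 16
    # files (320 with the defaults of 10 and 2). Larger values keep the
    # tree wider and shallower before splitting starts.
    filestore merge threshold = 40
    filestore split multiple = 8

As far as I know these only affect future splits; directories that have
already split on an existing OSD are not merged back just by changing the
config.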
Rick
> On Jul 20, 2016, at 3:19 PM, Kane Kim <kane.ist...@gmail.com> wrote:
> 
> Hello,
> 
> I was running cosbench for some time and noticed a sharp, consistent 
> performance decrease at some point.
> 
> Image is here: http://take.ms/rorPw


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
