Hi,
Increasing the PG count for the pools that hold your data might help, if you
haven't done that already.
Check out this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-January/027094.html
You might find some tips there (although that thread pre-dates Firefly).
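For reference, the usual steps look roughly like this. This is only a sketch: "rbd" stands in for your actual pool name, and 2048 is a placeholder target you should size for your OSD count (the rule of thumb is on the order of 100 PGs per OSD divided by the replica count, rounded to a power of two).

```shell
# Check the current PG count for a pool ("rbd" is just an example name):
ceph osd pool get rbd pg_num

# Raise pg_num first, then pgp_num to the same value so data actually
# starts rebalancing; 2048 is a placeholder, size it for your cluster:
ceph osd pool set rbd pg_num 2048
ceph osd pool set rbd pgp_num 2048

# Optionally, let Ceph lower the weight of over-full OSDs
# (110 means "more than 110% of average utilization"):
ceph osd reweight-by-utilization 110
```

Note that raising pg_num triggers PG splitting and data movement, so do it in steps on a busy cluster.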
On 28.06.2014 at 14:44, Jianing Yang <jianingy.y...@gmail.com> wrote:
Hi, all
My cluster has been running for about 4 months now. I have about 108
OSDs, all of them 600 GB SAS disks. Their disk usage is between 70% and 85%.
It seems that Ceph cannot distribute data evenly with the default settings. Is
there any configuration that helps distribute data more evenly?
Thanks very much
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Konrad Gutkowski