Re: [ceph-users] ceph data replication not even on every osds

2014-07-08 Thread Kaifeng Yao
It is not a crush map thing. What is your PG/OSD ratio? Ceph recommends 100-200 PGs per OSD (after multiplying by the replica count or the EC stripe width). But even so, we have observed roughly 20-40% variation in the PG/OSD distribution. You may try a higher PG/OSD ratio, but be warned that the messenge
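The recommendation above can be checked with simple arithmetic. The sketch below uses hypothetical pool numbers (2048 PGs, 3x replication, 40 OSDs); none of these figures come from the post.

```python
# Rough PG-per-OSD estimate. The pool numbers here are
# illustrative assumptions, not values from the thread.
pg_num = 2048        # total PGs in the pool (hypothetical)
replica_size = 3     # replication factor (hypothetical)
num_osds = 40        # OSDs in the cluster (hypothetical)

# Each PG is placed on `replica_size` OSDs, so the average
# number of PG copies an OSD holds is:
pgs_per_osd = pg_num * replica_size / num_osds
print(pgs_per_osd)   # 153.6 -- inside the recommended 100-200 band
```

If the result falls well outside 100-200, adjusting `pg_num` for the pool is the usual first step before touching the CRUSH map.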

Re: [ceph-users] pid_max value?

2014-06-13 Thread Kaifeng Yao
The number of threads created depends on the number of OSDs per host as well as the cluster size. You have really a lot (40!) of OSDs on a single node, but the good part is that you've got a small cluster (only 4 nodes). If you have already run into the problem, then the only way out is to increase pid_max. Remember to re
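The scaling described above can be sketched as a back-of-the-envelope estimate. The per-OSD thread figures below are illustrative assumptions, not Ceph constants; only the 40-OSD, 4-node topology comes from the post.

```python
# Rough per-host thread estimate for a 4-node, 40-OSD-per-host cluster.
osds_per_host = 40      # from the post: 40 OSDs on one node
peer_osds = 120         # OSDs on the other 3 nodes (3 * 40)
threads_per_peer = 2    # assumed: one send + one receive messenger thread
base_threads = 30       # assumed: op/disk/internal worker threads per OSD

# Each OSD keeps its base worker threads plus messenger threads
# toward every peer OSD it talks to:
threads_per_osd = base_threads + peer_osds * threads_per_peer
host_threads = osds_per_host * threads_per_osd
print(host_threads)     # 40 * (30 + 240) = 10800

# The Linux default kernel.pid_max is 32768; with other daemons on
# the box, a count like this can approach the limit, hence e.g.:
#   sysctl -w kernel.pid_max=4194303
```

The exact per-OSD thread counts vary by Ceph version and configuration; the point is that threads grow with both local OSD count and cluster size, which is why a dense node in a growing cluster eventually hits pid_max.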