On Thu, 24 Sep 2015, Alexander Yang wrote:
> I ran 'ceph osd crush dump | tail -n 20' and got:
>
> "type": 1,
> "min_size": 1,
> "max_size": 10,
> "steps": [
> { "op": "take",
> "item": -62,
> "item_na
Hello Alexander,
One other point on your email: you indicate you want each OSD to carry
~100 PGs, but depending on your pool size it seems you may have forgotten
about the additional PG copies that replication itself adds.
Assuming 3x replication in your environment:
70,000 * 3 = 210,000 PG copies, which spread across your 800 OSDs comes to
roughly 260 PGs per OSD rather than ~100.
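If you want to sanity-check that figure against the live cluster rather than
on paper, something like the sketch below should do it; the numbers are just
the ones from this thread, and on recent releases the PGS column of
'ceph osd df' shows what each OSD is actually carrying:

    # back-of-the-envelope PG copies per OSD: pg_num * replicas / osds
    echo $(( 70000 * 3 / 800 ))    # prints 262

    # compare with the real per-OSD counts (PGS column)
    ceph osd df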
On Wed, 23 Sep 2015, Alexander Yang wrote:
> Hello,
> We use Ceph + OpenStack in our private cloud. In our cluster we have
> 5 mons and 800 OSDs, and the capacity is about 1 PB. We run about 700 VMs
> and 1100 volumes.
> Recently we increased our pg_num; now the cluster has about 7
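For context on the pg_num bump mentioned above: it is done per pool, and the
matching pgp_num change is what actually triggers the data movement, so it
usually looks something like the sketch below (the pool name and target count
are placeholders, not your real values):

    # illustrative only -- substitute the real pool name and pg count
    ceph osd pool set volumes pg_num 8192
    ceph osd pool set volumes pgp_num 8192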