Re: [ceph-users] PG Scaling

2014-03-14 Thread Karol Kozubal
http://ceph.com/docs/master/rados/operations/placement-groups/ It's provided in the example calculation on that page. Karol On 2014-03-14, 10:37 AM, "Christian Kauhaus" wrote: >On 12.03.2014 18:54, McNamara, Bradley wrote: >> Round up your pg_num and pgp_num to the next power of 2, 2048. >
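The example calculation on that docs page boils down to a simple rule of thumb: target roughly 100 PGs per OSD, divide by the replication factor, and round up to the next power of two. A minimal sketch (the `pgs_per_osd = 100` target is the docs' suggested default, not a hard limit):

```python
def suggested_pg_num(num_osds: int, replicas: int, pgs_per_osd: int = 100) -> int:
    """Rule of thumb from the Ceph placement-groups docs:
    ~100 PGs per OSD, divided by the replication factor,
    rounded up to the next power of two."""
    target = num_osds * pgs_per_osd // replicas
    # round up to the next power of two
    return 1 << (target - 1).bit_length()

# The cluster discussed in this thread: 20 nodes x 3 OSDs = 60 OSDs, size 3
print(suggested_pg_num(60, 3))  # -> 2048
```

For 60 OSDs at size 3 this gives 60 × 100 / 3 = 2000, rounded up to 2048, which matches the number quoted below.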

Re: [ceph-users] PG Scaling

2014-03-14 Thread Christian Kauhaus
On 12.03.2014 18:54, McNamara, Bradley wrote: > Round up your pg_num and pgp_num to the next power of 2, 2048. I'm wondering where the "power of two" rule comes from. I can't find it in the documentation. Moreover, the example at http://ceph.com/docs/master/rados/configuration/pool-pg-config-ref
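One plausible origin of the "power of two" advice is Ceph's stable hashing: objects are placed with a stable-modulo function (`ceph_stable_mod` in the Ceph sources), and when pg_num is not a power of two, the hash values above pg_num fold back onto the lower PGs, so some PGs receive roughly twice as many objects as others. A small simulation sketching that behavior (assumption: this mirrors the stable-mod semantics, not the full CRUSH placement):

```python
import random

def ceph_stable_mod(x: int, b: int, bmask: int) -> int:
    # Stable modulo as used by Ceph: maps hash x into [0, b) such that
    # increasing b splits one PG at a time. bmask = next_pow2(b) - 1.
    return x & bmask if (x & bmask) < b else x & (bmask >> 1)

def pg_object_counts(pg_num: int, n_objects: int = 120_000, seed: int = 1) -> list[int]:
    bmask = (1 << (pg_num - 1).bit_length()) - 1  # next power of two, minus 1
    rng = random.Random(seed)
    counts = [0] * pg_num
    for _ in range(n_objects):
        counts[ceph_stable_mod(rng.getrandbits(32), pg_num, bmask)] += 1
    return counts

# With pg_num = 12 (not a power of two), hashes landing on 12..15 fold
# back onto PGs 4..7, so those four PGs carry about double the objects.
counts = pg_object_counts(12)
```

With a power-of-two pg_num the fold never happens and the distribution is uniform, which is why rounding up is commonly recommended even though it is not a strict requirement.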

Re: [ceph-users] PG Scaling

2014-03-12 Thread Karol Kozubal
create it. Brad From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Karol Kozubal Sent: Wednesday, March 12, 2014 9:08 AM To: ceph-users@lists.ceph.com Subject: R

Re: [ceph-users] PG Scaling

2014-03-12 Thread McNamara, Bradley
and recreate it. Brad From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Karol Kozubal Sent: Wednesday, March 12, 2014 9:08 AM To: ceph-users@lists.ceph.com Sub

Re: [ceph-users] PG Scaling

2014-03-12 Thread Karol Kozubal
pool and recreate it. Brad From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Karol Kozubal Sent: Wednesday, March 12, 2014 9:08 AM To: ceph-users@lists.ceph.com Su

Re: [ceph-users] PG Scaling

2014-03-12 Thread McNamara, Bradley
and recreate it. Brad From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Karol Kozubal Sent: Wednesday, March 12, 2014 9:08 AM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] PG Scaling Correction: Sorry min_size is at 1 everywhere. T

Re: [ceph-users] PG Scaling

2014-03-12 Thread Karol Kozubal
Correction: Sorry min_size is at 1 everywhere. Thank you. Karol Kozubal From: Karol Kozubal <karol.kozu...@elits.com> Date: Wednesday, March 12, 2014 at 12:06 PM To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com> Subject: PG Scaling Hi

[ceph-users] PG Scaling

2014-03-12 Thread Karol Kozubal
Hi Everyone, I am deploying OpenStack with Fuel 4.1 and have a 20 node Ceph deployment of C6220s with 3 OSDs and 1 journaling disk per node. When first deployed, each storage pool is configured with the correct size and min_size attributes; however, Fuel doesn’t seem to apply the c
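For a cluster like the one described (60 OSDs, size 3), the fix the thread converges on can be sketched as a script that emits the `ceph osd pool set` commands to run per pool. The pool name `volumes` is a hypothetical example, and note that in the Ceph releases of this era pg_num could only be raised, never lowered, hence Brad's advice to delete and recreate the pool if it was created too large:

```python
def pool_tuning_commands(pool: str, num_osds: int, size: int) -> list[str]:
    """Emit the `ceph osd pool set` commands to bring an installer-created
    pool in line with the docs' suggested pg_num. Pool name is an example."""
    target = num_osds * 100 // size
    pg_num = 1 << (target - 1).bit_length()  # round up to next power of two
    return [
        f"ceph osd pool set {pool} size {size}",
        f"ceph osd pool set {pool} min_size 1",
        f"ceph osd pool set {pool} pg_num {pg_num}",
        f"ceph osd pool set {pool} pgp_num {pg_num}",
    ]

for cmd in pool_tuning_commands("volumes", num_osds=60, size=3):
    print(cmd)
```

Setting pgp_num to match pg_num is what actually triggers rebalancing onto the new placement groups; raising pg_num alone only splits them.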