On 04/07/2013 01:07, Vladislav Gorbunov wrote:
ceph osd pool set data pg_num 1800
And I do not understand why OSDs 16 and 19 are hardly used
Actually, you need to change the pgp_num for the data to really rebalance:
ceph osd pool set data pgp_num 1800
Check it with the command:
ceph osd dump | grep 'pgp_num'
> ceph osd pool set data pg_num 1800
> And I do not understand why OSDs 16 and 19 are hardly used
Actually, you need to change the pgp_num for the data to really rebalance:
ceph osd pool set data pgp_num 1800
Check it with the command:
ceph osd dump | grep 'pgp_num'
2013/7/3 Pierre BLONDEAU:
> On 01/07 […]
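Putting the commands from this exchange together, a minimal sketch for the 'data' pool, assuming the 1800 target discussed above:

# split the placement groups first, then raise the effective placement count
ceph osd pool set data pg_num 1800
ceph osd pool set data pgp_num 1800
# both values should now show up as 1800 in the pool line
ceph osd dump | grep 'pg_num'

Data movement only starts once pgp_num catches up with pg_num.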
On Wed, Jul 3, 2013 at 2:12 AM, Pierre BLONDEAU wrote:
> Hi,
>
> Thank you very much for your answer. Sorry for the late reply, but
> modifying a 67 TB cluster takes a while ;)
>
> Actually my pg number was far too low:
>
> ceph osd pool get data pg_num
> pg_num: 48
>
> As I'm not sure of […]
Did you also set the pgp_num? As I understand it, the newly created PGs aren't
considered for placement until you increase pgp_num, a.k.a. the effective PG number (see the sketch after this message).
Sent from my iPad
On Jul 3, 2013, at 11:54 AM, Pierre BLONDEAU wrote:
> On 03/07/2013 11:12, Pierre BLONDEAU wrote:
>> On 01/07/201 […]
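A quick way to check for that, sketched for the 'data' pool from this thread (this assumes pgp_num can also be read back with `ceph osd pool get`, like pg_num above):

ceph osd pool get data pg_num     # e.g. pg_num: 1800
ceph osd pool get data pgp_num    # if this still shows the old value, nothing has rebalanced yet
# raise the effective placement count to match
ceph osd pool set data pgp_num 1800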
On 03/07/2013 11:12, Pierre BLONDEAU wrote:
On 01/07/2013 19:17, Gregory Farnum wrote:
On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote:
On 1 Jul 2013, at 17:37, Gregory Farnum wrote:
Oh, that's out of date! PG splitting is supported in Cuttlefish:
"ceph osd pool set <poolname> pg_num <numpgs>"
http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
On 01/07/2013 19:17, Gregory Farnum wrote:
On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote:
On 1 Jul 2013, at 17:37, Gregory Farnum wrote:
Oh, that's out of date! PG splitting is supported in Cuttlefish:
"ceph osd pool set <poolname> pg_num <numpgs>"
http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
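In generic form (the placeholders here are mine; the original message and the doc page spell out the exact syntax), the splitting command looks something like:

# <poolname> and <numpgs> are placeholders; pg_num can only grow, PGs cannot be merged back
ceph osd pool set <poolname> pg_num <numpgs>
# per the rest of the thread, follow up with pgp_num so data actually rebalances
ceph osd pool set <poolname> pgp_num <numpgs>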
On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote:
>
> On 1 Jul 2013, at 17:37, Gregory Farnum wrote:
>
>> Oh, that's out of date! PG splitting is supported in Cuttlefish:
>> "ceph osd pool set <poolname> pg_num <numpgs>"
>> http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
>
> Ah, so:
> pg_num: […]
On 1 Jul 2013, at 17:37, Gregory Farnum wrote:
> Oh, that's out of date! PG splitting is supported in Cuttlefish:
> "ceph osd pool set <poolname> pg_num <numpgs>"
> http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
Ah, so:
pg_num: The placement group number.
means
pg_num: The number of placement groups […]
On Mon, Jul 1, 2013 at 9:15 AM, Alex Bligh wrote:
> On 1 Jul 2013, at 17:02, Gregory Farnum wrote:
>
>> It looks like your PG counts are probably too low for the number of
>> OSDs you have.
>> http://ceph.com/docs/master/rados/operations/placement-groups/
>
> The docs you referred Pierre to say: […]
On 1 Jul 2013, at 17:02, Gregory Farnum wrote:
> It looks like your PG counts are probably too low for the number of
> OSDs you have.
> http://ceph.com/docs/master/rados/operations/placement-groups/
The docs you referred Pierre to say:
"Important: Increasing the number of placement groups in a pool […]"
On Mon, Jul 1, 2013 at 8:49 AM, Pierre BLONDEAU wrote:
> Hi,
>
> I use Ceph 0.64.1 on Debian wheezy for the servers and Ubuntu
> precise (with the raring kernel 3.8.0-25) as the client.
>
> My problem is the distribution of data on the cluster. I have 3 servers, each
> with 6 OSDs, but the distribution […]
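For the cluster described here (3 servers x 6 OSDs = 18 OSDs, presumably 3 replicas), a rough back-of-the-envelope check against the usual guideline of about 100 placement groups per OSD divided by the replica count (the exact wording in the placement-groups doc has varied between releases):

osds=18; replicas=3
echo $(( 100 * osds / replicas ))   # 600, typically rounded up to the next power of two, 1024

Either way, that is far above the pg_num of 48 reported earlier in the thread.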