Re: [ceph-users] Problem with data distribution

2013-07-04 Thread Pierre BLONDEAU
On 04/07/2013 01:07, Vladislav Gorbunov wrote: >> ceph osd pool set data pg_num 1800 >> And I do not understand why OSDs 16 and 19 are hardly used > Actually you need to change the pgp_num for real data rebalancing: > ceph osd pool set data pgp_num 1800 > Check it with the command: ceph osd dump | grep 'pgp_num'
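
A minimal command sketch of the fix discussed above, assuming the pool is named data and the target of 1800 PGs from this thread (pg_num creates the new PGs, pgp_num lets CRUSH actually start placing data into them):

  ceph osd pool set data pg_num 1800    # split the existing PGs into 1800
  ceph osd pool set data pgp_num 1800   # make the new PGs eligible for placement
  ceph osd dump | grep 'pg_num'         # the pool line shows both pg_num and pgp_num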

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Vladislav Gorbunov
> ceph osd pool set data pg_num 1800 > And I do not understand why OSDs 16 and 19 are hardly used Actually you need to change the pgp_num for real data rebalancing: ceph osd pool set data pgp_num 1800 Check it with the command: ceph osd dump | grep 'pgp_num' 2013/7/3 Pierre BLONDEAU : > On 01/07

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Gregory Farnum
On Wed, Jul 3, 2013 at 2:12 AM, Pierre BLONDEAU wrote: > Hi, > > Thank you very much for your answer. Sorry for the late reply, but > modifying a 67 TB cluster takes a while ;) > > Actually my PG count was far too low: > > ceph osd pool get data pg_num > pg_num: 48 > > As I'm not sure of
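
For sizing, a rough worked example (hypothetical numbers, apart from Pierre's 3 servers with 6 OSDs each): the placement-groups documentation linked later in the thread suggests on the order of 100 PGs per OSD divided by the replica count, rounded up to a power of two, which makes 48 PGs clearly too few for 18 OSDs:

  ceph osd pool get data pg_num   # was pg_num: 48 here
  ceph osd pool get data size     # replica count of the pool
  # rule of thumb: (100 * OSD count) / replicas, e.g. 100 * 18 / 2 = 900, round up to 1024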

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Michael Lowe
Did you also set the pgp_num? As I understand it, the newly created PGs aren't considered for placement until you increase pgp_num, aka the effective PG count. Sent from my iPad. On Jul 3, 2013, at 11:54 AM, Pierre BLONDEAU wrote: > On 03/07/2013 11:12, Pierre BLONDEAU wrote: >> On 01/07/201

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Pierre BLONDEAU
On 03/07/2013 11:12, Pierre BLONDEAU wrote: On 01/07/2013 19:17, Gregory Farnum wrote: On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote: On 1 Jul 2013, at 17:37, Gregory Farnum wrote: Oh, that's out of date! PG splitting is supported in Cuttlefish: "ceph osd pool set <pool> pg_num <num>" http://ceph.com/docs/master/rados/operations/control/#osd-subsystem

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Pierre BLONDEAU
On 01/07/2013 19:17, Gregory Farnum wrote: On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote: On 1 Jul 2013, at 17:37, Gregory Farnum wrote: Oh, that's out of date! PG splitting is supported in Cuttlefish: "ceph osd pool set <pool> pg_num <num>" http://ceph.com/docs/master/rados/operations/control/#osd-subsystem

Re: [ceph-users] Problem with data distribution

2013-07-01 Thread Gregory Farnum
On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote: > > On 1 Jul 2013, at 17:37, Gregory Farnum wrote: > >> Oh, that's out of date! PG splitting is supported in Cuttlefish: >> "ceph osd pool set <pool> pg_num <num>" >> http://ceph.com/docs/master/rados/operations/control/#osd-subsystem > > Ah, so: > pg_num:

Re: [ceph-users] Problem with data distribution

2013-07-01 Thread Alex Bligh
On 1 Jul 2013, at 17:37, Gregory Farnum wrote: > Oh, that's out of date! PG splitting is supported in Cuttlefish: > "ceph osd pool set <pool> pg_num <num>" > http://ceph.com/docs/master/rados/operations/control/#osd-subsystem Ah, so: pg_num: The placement group number. means pg_num: The number of placement groups
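
The generic form behind the quoted command (the angle brackets are placeholders, not literal values; pg_num can only be raised, not lowered, and the pgp_num replies above note it should usually be raised to match):

  ceph osd pool set <pool> pg_num <num>
  ceph osd pool set <pool> pgp_num <num>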

Re: [ceph-users] Problem with data distribution

2013-07-01 Thread Gregory Farnum
On Mon, Jul 1, 2013 at 9:15 AM, Alex Bligh wrote: > > On 1 Jul 2013, at 17:02, Gregory Farnum wrote: > >> It looks like your PG counts are probably too low for the number of >> OSDs you have. >> http://ceph.com/docs/master/rados/operations/placement-groups/ > > The docs you referred Pierre to say:

Re: [ceph-users] Problem with data distribution

2013-07-01 Thread Alex Bligh
On 1 Jul 2013, at 17:02, Gregory Farnum wrote: > It looks like your PG counts are probably too low for the number of > OSDs you have. > http://ceph.com/docs/master/rados/operations/placement-groups/ The docs you referred Pierre to say: "Important: Increasing the number of placement groups in a pool
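
One cautious way to act on that warning (a sketch, not something prescribed in the thread) is to raise pg_num and pgp_num in steps and let the cluster settle in between:

  ceph osd pool set data pg_num 256
  ceph osd pool set data pgp_num 256
  ceph health        # wait for backfill/recovery to finish before the next step
  # then repeat with 512, 1024, ... up to the target value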

Re: [ceph-users] Problem with data distribution

2013-07-01 Thread Gregory Farnum
On Mon, Jul 1, 2013 at 8:49 AM, Pierre BLONDEAU wrote: > Hi, > > I use Ceph 0.64.1 on Debian Wheezy for the servers and Ubuntu > Precise (with the Raring kernel 3.8.0-25) as the client. > > My problem is the distribution of data on the cluster. I have 3 servers, each > with 6 OSDs, but the distribution
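
A few commands that should help show the imbalance itself (a sketch; the /var/lib/ceph/osd/* path assumes the default OSD data directory layout):

  ceph osd tree                  # check that all 18 OSDs are up/in and weighted to match their disk sizes
  ceph pg dump                   # the OSD stats near the end should show per-OSD usage
  df -h /var/lib/ceph/osd/*      # or look at usage directly on each server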