Re: [ceph-users] Uneven OSD usage

2014-09-03 Thread Craig Lewis
ceph osd reweight-by-utilization is ok to use, as long as it's temporary. I've used it while waiting for new hardware to arrive. It adjusts the weight displayed in ceph osd tree, but not the weight used in the crushmap. Yeah, there are two different weights for an OSD. Leave the crushmap weight alone.
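A minimal sketch of the two weights being described, assuming a recent-enough Ceph CLI; the OSD id and weight values here are illustrative:

$ ceph osd tree                        # REWEIGHT column shows the temporary override
$ ceph osd reweight 3 0.85             # override weight (0.0-1.0), not stored in the crushmap
$ ceph osd crush reweight osd.3 1.82   # crushmap weight, conventionally the disk size in TB
$ ceph osd reweight-by-utilization 120 # auto-adjust overrides for OSDs above 120% of mean usage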

Re: [ceph-users] Uneven OSD usage

2014-08-30 Thread Christian Balzer
Hello,

On Sat, 30 Aug 2014 18:27:22 -0400 J David wrote:
> On Fri, Aug 29, 2014 at 2:53 AM, Christian Balzer wrote:
> >> Now, 1200 is not a power of two, but it makes sense. (12 x 100).
> > Should have been 600 and then upped to 1024.
>
> At the time, there was a reason why doing that did not work, but I
> don't remember the specifics.

Re: [ceph-users] Uneven OSD usage

2014-08-30 Thread J David
On Fri, Aug 29, 2014 at 2:53 AM, Christian Balzer wrote:
>> Now, 1200 is not a power of two, but it makes sense. (12 x 100).
> Should have been 600 and then upped to 1024.

At the time, there was a reason why doing that did not work, but I don't remember the specifics. All messages sent back in
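For reference, a sketch of how that bump would look, assuming the pool is the "rbd" pool discussed later in the thread; in Ceph of this era pg_num can only be increased, and pgp_num has to be raised separately before data actually moves:

$ sudo ceph osd pool set rbd pg_num 1024
$ sudo ceph osd pool set rbd pgp_num 1024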

Re: [ceph-users] Uneven OSD usage

2014-08-28 Thread Christian Balzer
Hello,

On Fri, 29 Aug 2014 02:32:39 -0400 J David wrote:
> On Thu, Aug 28, 2014 at 10:47 PM, Christian Balzer wrote:
> >> There are 1328 PG's in the pool, so about 110 per OSD.
> >>
> > And just to be pedantic, the PGP_NUM is the same?
>
> Ah, "ceph status" reports 1328 pgs. But:
>
> $ sudo ceph osd pool get rbd pg_num

Re: [ceph-users] Uneven OSD usage

2014-08-28 Thread J David
On Thu, Aug 28, 2014 at 10:47 PM, Christian Balzer wrote:
>> There are 1328 PG's in the pool, so about 110 per OSD.
>>
> And just to be pedantic, the PGP_NUM is the same?

Ah, "ceph status" reports 1328 pgs. But:

$ sudo ceph osd pool get rbd pg_num
pg_num: 1200
$ sudo ceph osd pool get rbd pgp_num
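The 1328 that "ceph status" shows is presumably the total across all pools, while pg_num and pgp_num are per-pool values; a sketch of how to line the two numbers up, using only standard subcommands:

$ sudo ceph osd lspools               # list every pool in the cluster
$ sudo ceph osd dump | grep pg_num    # pg_num and pgp_num for each pool on one line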

Re: [ceph-users] Uneven OSD usage

2014-08-28 Thread Christian Balzer
Hello,

On Thu, 28 Aug 2014 19:49:59 -0400 J David wrote:
> On Thu, Aug 28, 2014 at 7:00 PM, Robert LeBlanc wrote:
> > How many PGs do you have in your pool? This should be about 100/OSD.
>
> There are 1328 PG's in the pool, so about 110 per OSD.

And just to be pedantic, the PGP_NUM is the same?
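Worth noting why the question matters: pg_num only splits PGs, while pgp_num controls how many of them CRUSH actually uses for placement, so a pool with pgp_num lower than pg_num keeps its old data distribution. A sketch of bringing them in line, with the pool name taken from the thread:

$ sudo ceph osd pool set rbd pgp_num 1200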

Re: [ceph-users] Uneven OSD usage

2014-08-28 Thread J David
On Thu, Aug 28, 2014 at 7:00 PM, Robert LeBlanc wrote:
> How many PGs do you have in your pool? This should be about 100/OSD.

There are 1328 PG's in the pool, so about 110 per OSD. Thanks!

Re: [ceph-users] Uneven OSD usage

2014-08-28 Thread Robert LeBlanc
How many PGs do you have in your pool? This should be about 100/OSD. If it is too low, you could get an imbalance. I don't know the consequences of changing it on such a full cluster. The default values are only good for small test environments.

Robert LeBlanc
Sent from a mobile device, please excuse my brevity.
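A worked version of that rule of thumb, assuming the 12-OSD cluster described below and the default replica count of 3 (the replica count is not stated in the thread):

# total PGs ~= (OSDs * 100) / replicas, rounded up to a power of two
# 12 * 100 / 3 = 400  ->  next power of two is 512
# note that 1328 / 12 ~= 110 counts PGs, not PG replicas; with 3x
# replication each OSD actually holds about 1328 * 3 / 12 ~= 332 copies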

[ceph-users] Uneven OSD usage

2014-08-28 Thread J David
Hello,

Is there any way to provoke a ceph cluster to level out its OSD usage? Currently, a cluster of 3 servers with 4 identical OSDs each is showing a disparity of about 20% between the most-used OSD and the least-used OSD. This wouldn't be too big of a problem, but the most-used OSD is now at 86% full.
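A sketch of the usual ways to quantify that imbalance, with the caveat that "ceph osd df" only appeared in releases newer than this thread:

$ ceph osd tree       # weights and any temporary reweight overrides
$ ceph pg dump osds   # per-OSD used/available space as the monitors see it
$ ceph osd df         # later releases: per-OSD utilization and PG count in one view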