Re: [ceph-users] PGs per OSD guidance

2017-07-19 Thread David Turner
Adrian Saul wrote: > > Anyone able to offer any advice on this? > > Cheers, > Adrian > > > > -----Original Message----- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > > Adrian Saul > > Sent: Friday, 14 July 2017 6:05 PM

Re: [ceph-users] PGs per OSD guidance

2017-07-19 Thread Adrian Saul
Anyone able to offer any advice on this? Cheers, Adrian > -----Original Message----- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Adrian Saul > Sent: Friday, 14 July 2017 6:05 PM > To: 'ceph-users@lists.ceph.com' > Subject: [ceph-users] PGs per OSD guidance

[ceph-users] PGs per OSD guidance

2017-07-14 Thread Adrian Saul
Hi All, I have been reviewing the sizing of our PGs in light of some intermittent performance issues. When we have scrubs running, even when only a few are, we can sometimes see severe impacts on the performance of RBD images, enough that VMs start to appear stalled or unresponsive.
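
[Editor's note: the sizing question above is usually approached with the common rule of thumb of roughly 100 PGs per OSD across all pools sharing those OSDs. The Python sketch below only illustrates that arithmetic; the OSD count and replica size in the example are hypothetical, and real pools should be sized against the upstream pgcalc guidance for your release.]

# Rough sketch of the ~100-PGs-per-OSD rule of thumb; not tuning advice.
# The osd_count and replica_size used in the example are hypothetical.
def suggested_pg_num(osd_count, replica_size, target_pgs_per_osd=100):
    """Return a pg_num rounded up to the next power of two."""
    raw = osd_count * target_pgs_per_osd / replica_size
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# Example: 15 OSDs with 3x replication -> raw target 500, rounds up to 512.
print(suggested_pg_num(15, 3))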

Re: [ceph-users] pgs per OSD

2015-11-05 Thread Oleksandr Natalenko
(128*2+256*2+256*14+256*5)/15 =~ 375.

On Thursday, November 05, 2015 10:21:00 PM Deneau, Tom wrote:
> I have the following 4 pools:
>
> pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool stripe_width 0
> poo
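
[Editor's note: the figure quoted above can be reproduced directly: each pool contributes pg_num * size PG replicas, and the total is divided by the number of OSDs (15 in Tom's cluster). A small sketch follows; the two pools not visible in the snippet are given hypothetical names, since only their pg_num and size appear in the arithmetic.]

# (pg_num, replica size) per pool, taken from the arithmetic in the reply.
pools = {
    "rep2host": (128, 2),
    "rep2osd":  (256, 2),
    "pool_c":   (256, 14),  # hypothetical name; 256*14 from the formula above
    "pool_d":   (256, 5),   # hypothetical name; 256*5 from the formula above
}
osd_count = 15

pg_replicas = sum(pg_num * size for pg_num, size in pools.values())
print(pg_replicas / osd_count)  # 5632 / 15 =~ 375 PGs per OSD on average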

[ceph-users] pgs per OSD

2015-11-05 Thread Deneau, Tom
I have the following 4 pools:
pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool stripe_width 0
pool 17 'rep2osd' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256 last_
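
[Editor's note: a quick way to compare the theoretical per-OSD average against what each OSD actually holds is to parse ceph osd df. The sketch below assumes the JSON output of "ceph osd df -f json" exposes per-OSD entries under "nodes" with "name" and "pgs" fields; worth verifying against your Ceph release.]

import json
import subprocess

# Ask the cluster for per-OSD utilisation, which includes a PG count per OSD.
out = subprocess.run(
    ["ceph", "osd", "df", "-f", "json"],
    check=True, capture_output=True, text=True,
).stdout

# Print how many PGs each OSD holds; field names assumed as noted above.
for node in json.loads(out).get("nodes", []):
    print(f"{node['name']}: {node['pgs']} PGs")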