Re: [ceph-users] PGs per OSD guidance

2017-07-19 Thread David Turner
Here are a few thoughts. The more PGs, the higher the memory requirement for the OSD process. If scrubs are causing problems with customer IO, check the IO priority settings, which received a big overhaul with Jewel and again with 10.2.9. The more PGs you have, the smaller …
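As a reference for the IO priority settings mentioned above, here is a sketch of the Jewel-era scrub-throttling knobs in ceph.conf; the option names exist in Jewel/10.2.9, but the values below are illustrative assumptions, not recommendations:

    [osd]
    # Restrict scrubs to an off-peak window (hours are illustrative)
    osd scrub begin hour = 22
    osd scrub end hour = 6
    # Don't start new scrubs while host load is above this threshold
    osd scrub load threshold = 0.5
    # Sleep between scrub chunks so client IO gets disk time
    osd scrub sleep = 0.1
    # Lower scrub priority relative to client ops in the unified op queue
    osd scrub priority = 1

The same options can also be changed at runtime, e.g. with: ceph tell osd.\* injectargs '--osd_scrub_sleep 0.1'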

Re: [ceph-users] PGs per OSD guidance

2017-07-19 Thread Adrian Saul
Anyone able to offer any advice on this? Cheers, Adrian

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Adrian Saul
> Sent: Friday, 14 July 2017 6:05 PM
> To: 'ceph-users@lists.ceph.com'
> Subject: [ceph-users] PGs per OSD guidance
> …

Re: [ceph-users] pgs per OSD

2015-11-05 Thread Oleksandr Natalenko
(128*2+256*2+256*14+256*5)/15 =~ 375. On Thursday, November 05, 2015 10:21:00 PM Deneau, Tom wrote: > I have the following 4 pools: > > pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash > rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool > stripe_width 0 poo