The % should be based on how much of the total available storage you expect
that pool to take up. 256 PGs with replication 3 will distribute themselves
as 256 * 3 / 14, which works out to roughly 55 PGs per OSD. For the smaller
pool, 16 seems too low. You can go with 32 and 256 if you want a lower
number of PGs in the vms pool and expand later; the calculator recommends
32 and 512 for your settings.
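
As a rough sketch with the pool names from your setup (untested, so
double-check the names and replica settings on your cluster first), that
would look like:

    ceph osd pool create images 32 32
    ceph osd pool create vms 512 512

If the pools already exist, pg_num and pgp_num can be raised (but not
lowered) with "ceph osd pool set <pool> pg_num <n>" and the matching
pgp_num change.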

Subhachandra




On Fri, Aug 10, 2018 at 8:43 AM, Satish Patel <satish....@gmail.com> wrote:

> Folks,
>
>
> I used your link to calculate PGs and did the following:
>
> Total OSDs: 14
> Replicas: 3
> Total pools: 2 (images & vms). In %Data I gave 5% to images and 95% to
> vms (OpenStack).
>
> https://ceph.com/pgcalc/
>
> It gave me the following result:
>
> vms  -  512 PG
> images - 16 PG
>
> To be on the safe side I set vms to 256 PGs. Is that a good idea? You
> can increase pg_num but you can't reduce it, so I want to start smaller
> and keep room to increase it later; I just don't want to commit to a
> bigger number that causes other performance issues. Do you think my
> approach is right, or should I set 512 now?
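>
> (I assume the later increase would be something like "ceph osd pool set
> vms pg_num 512" followed by "ceph osd pool set vms pgp_num 512"; please
> correct me if that is not the right way to grow it.)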
>
> On Fri, Aug 10, 2018 at 9:23 AM, Satish Patel <satish....@gmail.com> wrote:
> > Re-sending this, because I found that I had lost my list membership and
> > wanted to make sure my email went through.
> >
> > On Fri, Aug 10, 2018 at 7:07 AM, Satish Patel <satish....@gmail.com> wrote:
> >> Thanks,
> >>
> >> Can you explain the %Data field in that calculation? Is it the total
> >> data usage for the specific pool, or the overall total?
> >>
> >> For example:
> >>
> >> Pool-1 is small, so should I use 20%?
> >> Pool-2 is bigger, so should I use 80%?
> >>
> >> I'm confused there, so can you give me an example of how to calculate
> >> that field?
> >>
> >> Sent from my iPhone
> >>
> >> On Aug 9, 2018, at 4:25 PM, Subhachandra Chandra
> >> <schan...@grailbio.com> wrote:
> >>
> >> I have used the calculator at https://ceph.com/pgcalc/, which looks at
> >> the relative sizes of the pools and makes a suggestion.
> >>
> >> Subhachandra
> >>
> >> On Thu, Aug 9, 2018 at 1:11 PM, Satish Patel <satish....@gmail.com> wrote:
> >>>
> >>> Thanks Subhachandra,
> >>>
> >>> That is a good point, but how do I calculate the PG count based on size?
> >>>
> >>> On Thu, Aug 9, 2018 at 1:42 PM, Subhachandra Chandra
> >>> <schan...@grailbio.com> wrote:
> >>> > If pool1 is going to be much smaller than pool2, you may want more
> >>> > PGs in pool2 for better distribution of data.
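> >>> >
> >>> > For example, with 14 OSDs and replication 3, a pool with only 16 PGs
> >>> > spreads just 16 * 3 = 48 PG copies over the 14 OSDs, so that pool's
> >>> > data will not balance very evenly.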
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > On Wed, Aug 8, 2018 at 12:40 AM, Sébastien VIGNERON
> >>> > <sebastien.vigne...@criann.fr> wrote:
> >>> >>
> >>> >> The formula seems correct for a 100 pg/OSD target.
> >>> >>
> >>> >>
> >>> >> > On 8 Aug 2018, at 04:21, Satish Patel <satish....@gmail.com> wrote:
> >>> >> >
> >>> >> > Thanks!
> >>> >> >
> >>> >> > Do you have any comments on Question 1?
> >>> >> >
> >>> >> > On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON
> >>> >> > <sebastien.vigne...@criann.fr> wrote:
> >>> >> >> Question 2:
> >>> >> >>
> >>> >> >> ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
> >>> >> >> set object or byte limit on pool
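> >>> >> >>
> >>> >> >> For example, to cap a pool named pool-1 at roughly 500GB (the value
> >>> >> >> is in bytes; just a sketch, adjust it to what you actually want):
> >>> >> >>
> >>> >> >>     ceph osd pool set-quota pool-1 max_bytes 536870912000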
> >>> >> >>
> >>> >> >>
> >>> >> >>> On 7 Aug 2018, at 16:50, Satish Patel <satish....@gmail.com> wrote:
> >>> >> >>>
> >>> >> >>> Folks,
> >>> >> >>>
> >>> >> >>> I am a little confused, so I just need clarification. I have 14
> >>> >> >>> OSDs in my cluster and I want to create two pools (pool-1 &
> >>> >> >>> pool-2). How do I divide the PGs between the two pools with
> >>> >> >>> replication 3?
> >>> >> >>>
> >>> >> >>> Question 1:
> >>> >> >>>
> >>> >> >>> Is this the correct formula?
> >>> >> >>>
> >>> >> >>> 14 * 100 / 3 / 2 = 233  (rounded up to the next power of 2: 256)
> >>> >> >>>
> >>> >> >>> So I should give each pool 256 PGs, right?
> >>> >> >>>
> >>> >> >>> pool-1 = 256 pg & pgp
> >>> >> >>> pool-2 = 256 pg & pgp
> >>> >> >>>
> >>> >> >>>
> >>> >> >>> Question 2:
> >>> >> >>>
> >>> >> >>> How do I set a size limit on a pool? For example, if I want pool-1
> >>> >> >>> to only use 500GB and pool-2 to use the rest of the space?
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
