If pool-1 is going to be much smaller than pool-2, you may want to give
pool-2 more PGs for better distribution of data.
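As a quick sanity check of the arithmetic in the thread below, here is a small sketch (not an official Ceph tool, just the common rule of thumb) that computes a per-pool PG count from OSD count, target PGs per OSD, replica size, and number of pools, rounding up to a power of two:

```python
def pgs_per_pool(num_osds, pgs_per_osd_target, replica_size, num_pools):
    """Rule-of-thumb PG count per pool, rounded up to a power of two."""
    raw = num_osds * pgs_per_osd_target / replica_size / num_pools
    # Round up to the next power of two, as the Ceph docs recommend.
    power = 1
    while power < raw:
        power *= 2
    return power

# 14 OSDs, 100 PGs/OSD target, replication 3, split over 2 pools:
# 14 * 100 / 3 / 2 = 233.33 -> rounds up to 256
print(pgs_per_pool(14, 100, 3, 2))
```

This matches the 256-per-pool figure Satish arrived at below.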

On Wed, Aug 8, 2018 at 12:40 AM, Sébastien VIGNERON <
sebastien.vigne...@criann.fr> wrote:

> The formula seems correct for a 100 pg/OSD target.
>
>
> > > On 8 Aug 2018, at 04:21, Satish Patel <satish....@gmail.com> wrote:
> >
> > Thanks!
> >
> > Do you have any comments on Question: 1 ?
> >
> > On Tue, Aug 7, 2018 at 10:59 AM, Sébastien VIGNERON
> > <sebastien.vigne...@criann.fr> wrote:
> >> Question 2:
> >>
> >> ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
> >>     set object or byte limit on pool
> >>
> >>
> >>> On 7 Aug 2018, at 16:50, Satish Patel <satish....@gmail.com> wrote:
> >>>
> >>> Folks,
> >>>
> >>> I am a little confused, so I just need clarification: I have 14 OSDs in
> >>> my cluster and I want to create two pools (pool-1 & pool-2). How do I
> >>> divide PGs between the two pools with replication 3?
> >>>
> >>> Question: 1
> >>>
> >>> Is this correct formula?
> >>>
> >>> 14 * 100 / 3 / 2 =  233  ( power of 2 would be 256)
> >>>
> >>> So I should give 256 PGs per pool, right?
> >>>
> >>> pool-1 = 256 pg & pgp
> >>> pool-2 = 256 pg & pgp
> >>>
> >>>
> >>> Question: 2
> >>>
> >>> How do I set a limit on a pool? For example, what if I want pool-1 to
> >>> use only 500 GB and pool-2 to use the rest of the space?
> >>> _______________________________________________
> >>> ceph-users mailing list
> >>> ceph-users@lists.ceph.com
> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
>
>
