I would also recommend keeping each pool at a power-of-two number of PGs. So
with the 512 PGs example, do 512 PGs for the data pool and 64 PGs for the
metadata pool.
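
For example, to create pools with those counts (just a sketch; the pool names
cephfs_data and cephfs_metadata are placeholders, and you should adjust the
names, PG counts, and replication to your own cluster):

    # create data and metadata pools with power-of-two PG counts
    ceph osd pool create cephfs_data 512 512
    ceph osd pool create cephfs_metadata 64 64
    # create the filesystem on top of them
    ceph fs new cephfs cephfs_metadata cephfs_data

You can verify the counts afterwards with "ceph osd pool get <pool> pg_num".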

On Sat, Jul 1, 2017 at 9:01 AM Wido den Hollander <w...@42on.com> wrote:

>
> > Op 1 juli 2017 om 1:04 schreef Tu Holmes <tu.hol...@gmail.com>:
> >
> >
> > I would use the calculator at ceph and just set for "all in one".
> >
> > http://ceph.com/pgcalc/
> >
>
> I wouldn't do that. With CephFS the data pool(s) will contain many more
> objects and much more data than the metadata pool.
>
> You can easily have 1024 PGs for the metadata pool and 8192 for the data
> pool for example.
>
> With the example of 512 PGs in total I'd assign 64 to the metadata pool
> and the rest to the data pool.
>
> Wido
>
> >
> > On Fri, Jun 30, 2017 at 6:45 AM Riccardo Murri <riccardo.mu...@gmail.com> wrote:
> >
> > > Hello!
> > >
> > > Are there any recommendations for how many PGs to allocate to a CephFS
> > > metadata pool?
> > >
> > > Assuming a simple case of a cluster with 512 PGs, to be distributed
> > > across the FS data and metadata pools, how would you make the split?
> > >
> > > Thanks,
> > > Riccardo
