Thanks, Chengwei Yang.

2016-07-29 17:17 GMT+07:00 Chengwei Yang <chengwei.yang...@gmail.com>:

> Would http://ceph.com/pgcalc/ help?
>
> On Mon, Jul 18, 2016 at 01:27:38PM +0700, Khang Nguyễn Nhật wrote:
> > Hi all,
> > I have a cluster consisting of 3 monitors, 1 RGW, and 1 host with 24
> > OSDs (2 TB/OSD), and the following pools:
> >     ap-southeast.rgw.data.root
> >     ap-southeast.rgw.control
> >     ap-southeast.rgw.gc
> >     ap-southeast.rgw.log
> >     ap-southeast.rgw.intent-log
> >     ap-southeast.rgw.usage
> >     ap-southeast.rgw.users.keys
> >     ap-southeast.rgw.users.email
> >     ap-southeast.rgw.users.swift
> >     ap-southeast.rgw.users.uid
> >     ap-southeast.rgw.buckets.index
> >     ap-southeast.rgw.buckets.data
> >     ap-southeast.rgw.buckets.non-ec
> >     ap-southeast.rgw.meta
> > In which "ap-southeast.rgw.buckets.data" is a erasure pool(k=20, m=4)
> and all
> > of the remaining pool are replicated(size=3). I've used (100*OSDs)/size
>  to
> > calculate the number of PGs, e.g. 100*24/3 = 800(nearest power of 2:
> 1024) for
> > replicated pools and 100*24/24=100(nearest power of 2: 128) for erasure
> pool.
> > I'm not sure this is the best placement group number, someone can give
> me some
> > advice ?
> > Thank !
>
> --
> Thanks,
> Chengwei
>
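For reference, a minimal sketch of that rule of thumb in Python (the OSD
count, pool sizes, and the 100-PGs-per-OSD target are taken from the thread
above; the function name and the round-up-to-a-power-of-two convention are
my own shorthand, and pgcalc remains the authoritative tool):

    import math

    def pg_count(num_osds, target_pgs_per_osd, pool_size):
        """Rule-of-thumb PG count: (OSDs * target PGs per OSD) / pool size,
        rounded up to the next power of two."""
        raw = num_osds * target_pgs_per_osd / pool_size
        return 2 ** math.ceil(math.log2(raw))

    # Replicated pools, size=3:              100 * 24 / 3  = 800 -> 1024
    print(pg_count(24, 100, 3))
    # Erasure-coded pool, size k+m = 20+4=24: 100 * 24 / 24 = 100 -> 128
    print(pg_count(24, 100, 24))

Note that for an erasure-coded pool the "size" in the divisor is k+m, which
is why the thread divides by 24 rather than 3.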
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
